Open Access (CC BY 4.0 license). Published by De Gruyter, March 10, 2018

Cubic Ordered Weighted Distance Operator and Application in Group Decision-Making

  • Muhammad Shakeel, Saleem Abdullah, and Rehan Ahmed

Abstract

Group decision-making is a very useful technique for ranking a group of alternatives. The ordered weighted distance (OWD) operator is a new tool for group decision-making problems. In this paper, we apply the OWD operator to cubic information. We develop a new operator, the so-called cubic OWD (COWD) operator, study its properties, and discuss some of its particular cases. Finally, we develop a general algorithm for group decision-making problems using the COWD operator and give an application to a group decision-making problem.

1 Introduction

The fuzzy set (FS) theory was introduced by Zadeh in 1965 [45]. FS theory has applications in many fields, including engineering science, computer science, mathematics, and management science. A precise model of a real-world problem in fields such as computer science, artificial intelligence, operations research, management science, control engineering, robotics, and expert systems often cannot be constructed because of the many uncertainties involved. To handle these uncertainties, we need tools, such as probability theory and the theory of FS [45], that have already been developed. For the concepts represented by FS and intuitionistic FS (IFs), an element with full membership (or nonmembership) is usually much easier to determine because of its categorical difference from other elements. Most existing distances based on the linear representation of IFs are linear in nature, in the sense of being based on the relative difference between membership degrees.

FS theory handles only the membership degree, and IFs are a generalization of FS. However, uncertain information cannot be fully explained by means of IFs. Therefore, in 2012, Jun et al. defined a new theory known as the cubic set theory, which is able to deal with uncertain problems. The cubic set theory can express satisfied, unsatisfied, and uncertain information, whereas FS theory and IFs fail to express all of these. A cubic set is a generalization of FS and IFs: it is a pair consisting of an interval valued FS (IVFS) and an FS, and it therefore carries more information than an FS or IFs alone. Mahmood and Khan [15] defined the cubic hesitant FS (CHFS) by combining the interval valued HFS [4] and the HFS [28] and defined some basic operations and properties of CHFS.

The concept of the neutrosophic set (NS), developed by Smarandache [26], is a more general platform that extends the concepts of the classic set, FS, IFs, and IVFS. Jun, Smarandache, and Kim [27] extended the concept of cubic sets to the NS: they introduced the notions of truth-internal (indeterminacy-internal, falsity-internal) and truth-external (indeterminacy-external, falsity-external) neutrosophic cubic sets and investigated related properties, for example, that the P-union and P-intersection of truth-internal (indeterminacy-internal, falsity-internal) neutrosophic cubic sets are again truth-internal (indeterminacy-internal, falsity-internal) neutrosophic cubic sets. Jun et al. [12] also worked on neutrosophic cubic sets and developed their various properties.

Decision-making is one of the most important and common activities in the real world. The main task of the decision-making process is to rank the achievable alternatives and select the best one [7], [8], [10], [22], [30], [46]. A single expert is often unable to provide complete information about the alternatives and thus cannot solve a decision-making problem alone. In this regard, more than one expert is required. In group decision-making, every expert gives his or her judgment over a set of alternatives based on the criteria of each alternative. This judgment information must then be aggregated into a single decision matrix, which calls for operators that aggregate each expert's judgment of each alternative.

In 1988, Yager introduced the ordered weighted averaging (OWA) operator and applied it to multiple-attribute group decision-making problems. The OWA operator plays a vital role in decision-making theory, and its reordering step is its characteristic property. There are many applications of the OWA operator in different areas [1], [2], [3], [5], [6], [11], [16], [17], [18], [19], [20], [21], [29], [31], [36], [37], [40], [41], [42], [43], [44]. Distance measures also play a prominent role in OWA aggregation. Xu and Chen [38] generalized the OWA operator using distance measures, introduced the ordered weighted distance (OWD) operator, and investigated techniques for determining its weights. An important property of the OWD operator is that it can relax (or intensify) the influence of unduly large or small deviations on the aggregation results by assigning them low (or high) weights. This property makes the operator particularly appropriate for areas such as group decision-making, medical diagnosis, data mining, cluster analysis, and pattern recognition. The OWD operator is an extension of several well-known distance measures.

Wei et al. [14], [32], [33], [34], [35] also worked on different aggregation operators such as the interval valued dual hesitant fuzzy linguistic geometric aggregation operators, interval valued hesitant fuzzy uncertain linguistic aggregation operators, and two-tuple linguistic aggregation operators. Zeng et al. [23], [24], [25] worked on different aggregation operators and also discussed the TOPSIS method.

Thus, with the advantage of the above-mentioned aggregation operators, we shall develop a cubic OWD (COWD) operator. This operator is very effective for the treatment of the data in the form of cubic numbers (CNs). The main advantage of the COWD operator is that it can alleviate the influence of unduly large (or small) deviations on the aggregation results by assigning them low (or high) weights. Moreover, it provides a robust formulation that includes a wide range of particular cases, such as the cubic max distance, cubic min distance, cubic normalized Hamming distance (CNHD), cubic normalized Euclidean distance (CNED), cubic normalized geometric distance (CNGD), cubic weighted Hamming distance (CWHD), cubic weighted Euclidean distance (CWED), cubic weighted geometric distance (CWGD), cubic ordered weighted Hamming distance (COWHD), cubic ordered weighted Euclidean distance (COWED), cubic ordered weighted geometric distance (COWGD), and generalized cubic OWA operator. Thus, the decision-maker is able to consider a wide range of scenarios and select the one that is in accordance with his interests.

This paper is arranged as follows. In Section 2, we review some aggregation operators and the OWD measures. In Section 3, we develop the COWD operator and study its various properties. In Section 4, we analyze different types of COWD operators. In Section 5, we briefly describe the decision-making process based on the developed operators, and we give a numerical example in Section 6. In Section 7, we compare the proposed operator with the intuitionistic fuzzy OWA (IFOWA) operator. Section 8 summarizes the main conclusions of the paper.

2 Preliminaries

In this section, we briefly review the OWA operator, the OWA distance (OWAD), and the OWD measure.

2.1 OWA Operator

The OWA operator introduced by Yager [39] provides a parameterized family of aggregation operators that includes the maximum, minimum, and average criteria. The prominent advantage of the OWA operator is that the input data are rearranged in descending order, so the weights are associated with the ordered positions of the input data rather than with the input data themselves. It can be defined as follows:

Definition 1: [39] An OWA operator of dimension n is a mapping OWA: R^n → R (R is the set of real numbers) that has an associated weighting vector W = (w1, w2, …, wn) with wj ∈ [0, 1], j = 1, 2, …, n, and ∑_{j=1}^{n} wj = 1, such that the OWA is defined as follows:

(1) OWA(a1, …, an) = ∑_{j=1}^{n} wj bj,

where bj is the jth largest of the ai. From a generalized perspective of the reordering step, it is possible to distinguish between the descending OWA operator and the ascending OWA operator. The OWA operator is commutative, monotonic, bounded, and idempotent [39].
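As an illustration (ours, not part of the original development; the function name is our own), the reordering step and the positional weights of Definition 1 can be sketched in a few lines:

```python
def owa(values, weights):
    """OWA operator (Definition 1): weights attach to ordered positions."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    b = sorted(values, reverse=True)  # the descending reordering step
    return sum(w * bj for w, bj in zip(weights, b))
```

With w = (1, 0, …, 0) this returns the maximum, with w = (0, …, 0, 1) the minimum, and with uniform weights the arithmetic mean, which is exactly the parameterized family mentioned above.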

2.2 OWAD Operator

Recently, Merigó and Gil-Lafuente [20] introduced a new index for decision-making that uses the OWA operator to calculate the Hamming distance, called the OWAD operator. For two sets of real numbers A = {a1, a2, …, an} and B = {b1, b2, …, bn}, the OWAD operator can be defined as follows:

Definition 2: [20] An OWAD operator of dimension n is a mapping OWAD: R^n × R^n → R (R is the set of real numbers) that has an associated weighting vector W = (w1, w2, …, wn) with wj ∈ [0, 1] and ∑_{j=1}^{n} wj = 1 such that

(2) OWAD(⟨C1, D1⟩, ⟨C2, D2⟩, …, ⟨Cn, Dn⟩) = ∑_{j=1}^{n} wj dj,

where dj is the jth largest of the |Ci − Di|. The OWAD operator is commutative, monotonic, bounded, and idempotent. The operator provides a parameterized family of aggregation operators ranging from the minimum to the maximum distance [20].

2.3 OWD Measure

Motivated by the idea of the OWA operator, Xu and Chen [38] developed an OWD measure that can be defined as follows:

Definition 3: [38] An OWD measure of dimension n is a mapping OWD: R^n × R^n → R that has an associated weighting vector W = (w1, w2, …, wn) with wj ∈ [0, 1], j = 1, 2, …, n, such that ∑_{j=1}^{n} wj = 1, according to the following formula:

(3) OWD(A, B) = (∑_{j=1}^{n} wj (d(aσ(j), bσ(j)))^λ)^{1/λ}, λ > 0,

where d(aj, bj) = |aj − bj| is the distance between the real numbers aj and bj, and (σ(1), σ(2), …, σ(n)) is any permutation of (1, 2, …, n) such that

(4) d(aσ(j−1), bσ(j−1)) ≥ d(aσ(j), bσ(j)), j = 2, …, n.

Remark 1: If λ = 1, then the OWD measure reduces to the OWAD operator (2). However, the OWAD and the OWD are mainly used to aggregate or measure data that take the form of exact numbers. In what follows, we extend these OWD measures to accommodate situations in which the input data are provided as CNs.
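A small sketch of the OWD measure (our own illustration; the function name is ours) makes the reordering step and the role of λ explicit:

```python
def owd(a, b, weights, lam=1.0):
    """OWD measure (Definition 3) for two real vectors a and b."""
    # individual distances, reordered descending (the sigma permutation of Eq. (4))
    devs = sorted((abs(x - y) for x, y in zip(a, b)), reverse=True)
    return sum(w * d ** lam for w, d in zip(weights, devs)) ** (1.0 / lam)
```

For λ = 1 this is the OWAD operator of Definition 2; λ = 2 gives an ordered weighted Euclidean distance.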

Definition 4: An IVFS is a mapping from X to [I], where [I] is the collection of all closed subintervals of [0, 1]. The collection of all interval valued sets is denoted by [I]*. For any A ∈ [I]* and x ∈ X, the membership degree of the element x is denoted by A(x) = [A^−(x), A^+(x)], where A^−: X → [0, 1] and A^+: X → [0, 1] are called the lower FS and upper FS in X, respectively.

Definition 5: [11] Let X be a fixed nonempty set. A cubic set is an object of the form:

(5) C˜ = {⟨x, A(x), λ(x)⟩ : x ∈ X},

where A is an IVFS and λ is an FS in X. A cubic set C˜ = ⟨x, A(x), λ(x)⟩ is simply denoted by C˜ = ⟨A˜, λ⟩. The collection of all cubic sets in X is denoted by C˜(X).

  1. If λ(x) ∈ A˜(x) for all x ∈ X, it is called an internal cubic set.

  2. If λ(x) ∉ A˜(x) for all x ∈ X, it is called an external cubic set.

  3. If λ(x) ∈ A˜(x) or λ(x) ∉ A˜(x) for all x ∈ X, it is simply called a cubic set.

Definition 6: [11] Let A˜ = ⟨A, λ⟩ and B˜ = ⟨B, μ⟩ be cubic sets in X. Then,

  1. (Equality) A˜ = B˜ ⟺ A = B and λ = μ;

  2. (P-order) A˜ ⊆_P B˜ ⟺ A ⊆ B and λ ≤ μ;

  3. (R-order) A˜ ⊆_R B˜ ⟺ A ⊆ B and λ ≥ μ.

Definition 7: [11] The complement of A˜ = ⟨A, λ⟩ is defined to be the cubic set

A˜^c = {⟨x, A^c(x), 1 − λ(x)⟩ | x ∈ X}.
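Definitions 4–7 can be captured with a small data type (a sketch under our own naming; a CN is stored as ⟨[lo, hi], lam⟩ and the interval complement is [1 − hi, 1 − lo]):

```python
from typing import NamedTuple

class CN(NamedTuple):
    """Cubic number <[lo, hi], lam>: an interval part plus an FS part."""
    lo: float   # lower bound of the IVFS part
    hi: float   # upper bound of the IVFS part
    lam: float  # FS membership value

    def is_internal(self) -> bool:
        # internal cubic set condition: lam lies inside the interval
        return self.lo <= self.lam <= self.hi

    def complement(self) -> "CN":
        # Definition 7: interval complement plus 1 - lam
        return CN(1 - self.hi, 1 - self.lo, 1 - self.lam)
```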

3 COWD Measure

The OWA operator plays a vital role in aggregating complex information: the input information is rearranged in order, and the weights are associated with the ordered positions of the inputs rather than with the inputs themselves. Distance and similarity measures for FS are an important research topic in FS theory and have been studied by many authors [9], [13]. However, in the literature, there is no ordered weighted distance measure between cubic sets. Motivated by the idea of the OWD measure and the OWAD operator, we develop an OWD between cubic sets, which can not only emphasize the importance of the ordered position of each deviation value but also provide a parameterized family of distance aggregation operators between cubic sets. In this section, we study the distance between two cubic sets, particularly two CNs, define some COWD operators, and study their fundamental properties.

Definition 8: A real function D: C˜S(X) × C˜S(X) → [0, 1] is called a distance measure for cubic sets if D satisfies the following properties:

  1. D(C˜1, C˜2) = D(C˜2, C˜1) for all C˜1, C˜2 ∈ C˜S(X),

  2. D(C˜, C˜^c) = 1 iff C˜ ∈ P(X), that is, C˜ is a crisp set,

  3. D(C˜1, C˜2) = 0 iff C˜1 = C˜2 for all C˜1, C˜2 ∈ C˜S(X),

  4. If C˜1 ⊆ C˜2 ⊆ C˜3, then D(C˜1, C˜2) ≤ D(C˜1, C˜3) and D(C˜2, C˜3) ≤ D(C˜1, C˜3) for all C˜1, C˜2, C˜3 ∈ C˜S(X).

Definition 9: To measure the deviation between any two CNs, we define the following distance measure between two CNs.

Definition 10: Let C˜ = ⟨[a1^−, a1^+], λ1⟩ and D˜ = ⟨[a2^−, a2^+], λ2⟩ be any two CNs. Then, the distance between C˜ and D˜ is denoted by dcs(C˜, D˜) and defined as follows:

(6) dcs(C˜, D˜) = (1/3)[|a1^− − a2^−| + |a1^+ − a2^+| + |λ1 − λ2|],

which is called the cubic distance between the two CNs. The cubic distance exhibits nonnegativity, commutativity, reflexivity, and the triangle inequality. These properties can be demonstrated by the following theorem:

Theorem 1: For any three CNs C˜ = ⟨A˜, λ⟩, C˜1 = ⟨A˜1, λ1⟩, and C˜2 = ⟨A˜2, λ2⟩:

  1. Nonnegativity: dcs(C˜1, C˜2) ≥ 0,

  2. Commutativity: dcs(C˜1, C˜2) = dcs(C˜2, C˜1),

  3. Reflexivity: dcs(C˜, C˜) = 0,

  4. Triangle inequality: dcs(C˜1, C˜) + dcs(C˜, C˜2) ≥ dcs(C˜1, C˜2).

Proof. The proofs of properties 1–3 are straightforward. The proof of property 4 is given as follows:

Since, writing C˜ = ⟨[a^−, a^+], λ⟩,

|a1^− − a2^−| = |a1^− − a^− + a^− − a2^−| ≤ |a1^− − a^−| + |a^− − a2^−|,
|a1^+ − a2^+| = |a1^+ − a^+ + a^+ − a2^+| ≤ |a1^+ − a^+| + |a^+ − a2^+|,
|λ1 − λ2| = |λ1 − λ + λ − λ2| ≤ |λ1 − λ| + |λ − λ2|,

adding these and dividing by 3 gives

(1/3)(|a1^− − a2^−| + |a1^+ − a2^+| + |λ1 − λ2|) ≤ (1/3)(|a1^− − a^−| + |a1^+ − a^+| + |λ1 − λ|) + (1/3)(|a^− − a2^−| + |a^+ − a2^+| + |λ − λ2|),

that is, dcs(C˜1, C˜2) ≤ dcs(C˜1, C˜) + dcs(C˜, C˜2), i.e. dcs(C˜1, C˜) + dcs(C˜, C˜2) ≥ dcs(C˜1, C˜2).

This completes the proof.
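Eq. (6) is straightforward to implement. The following sketch (our own helper; CNs are represented as plain (lo, hi, lam) triples) also spot-checks the triangle inequality of Theorem 1:

```python
def d_cs(c, d):
    """Cubic distance of Eq. (6): mean of the three coordinate deviations."""
    return (abs(c[0] - d[0]) + abs(c[1] - d[1]) + abs(c[2] - d[2])) / 3.0

# spot-check of Theorem 1(4) on three arbitrary CNs
c1, c, c2 = (0.3, 0.4, 0.5), (0.5, 0.6, 0.4), (0.8, 0.9, 0.9)
assert d_cs(c1, c) + d_cs(c, c2) >= d_cs(c1, c2)
```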

3.1 COWD Operator

Based on the above information, let Ω be the set of all CNs, and let C˜ = (C˜1, C˜2, …, C˜n) and D˜ = (D˜1, D˜2, …, D˜n) be two collections of CNs. Then, we can define the cubic weighted distance (CWD) operator and the COWD operator as follows.

Definition 11: A CWD operator of dimension n is a mapping CWD: Ω^n × Ω^n → [0, 1] that has an associated weighting vector W = (w1, w2, …, wn) with wj ∈ [0, 1] such that ∑_{j=1}^{n} wj = 1, according to the following formula:

(7) CWD(C˜, D˜) = (∑_{j=1}^{n} wj (dcs(C˜j, D˜j))^λ)^{1/λ}, λ > 0.

The CWD brings the additional advantage of enabling the comparison of CNs, which is why we adopt the CWD measure in this paper.

Definition 12: A COWD operator of dimension n is a mapping COWD: Ω^n × Ω^n → [0, 1] that has an associated weighting vector W = (w1, w2, …, wn) with wj ∈ [0, 1] and ∑_{j=1}^{n} wj = 1, according to the following formula:

(8) COWD(C˜, D˜) = (∑_{j=1}^{n} wj (dcs(C˜σ(j), D˜σ(j)))^λ)^{1/λ}, λ > 0,

where [σ(1), σ(2), …, σ(n)] is any permutation of (1, 2, …, n) such that

(9) dcs(C˜σ(j−1), D˜σ(j−1)) ≥ dcs(C˜σ(j), D˜σ(j)), j = 2, …, n.
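Eq. (8) can be sketched as follows (our own code; CNs are (lo, hi, lam) triples and the weights are assumed to sum to 1):

```python
def cowd(C, D, weights, lam=1.0):
    """COWD operator of Eq. (8) for two equal-length sequences of CNs."""
    def d_cs(c, d):  # cubic distance, Eq. (6)
        return sum(abs(x - y) for x, y in zip(c, d)) / 3.0
    # the sigma permutation of Eq. (9): deviations reordered in descending order
    devs = sorted((d_cs(c, d) for c, d in zip(C, D)), reverse=True)
    return sum(w * dj ** lam for w, dj in zip(weights, devs)) ** (1.0 / lam)
```

With w = (1, 0, …, 0) this returns the largest individual cubic distance; with uniform weights and λ = 1 it returns their mean.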

The COWD operator is an extension of the OWD measure and the OWAD operator for situations where the available information cannot be assessed with exact numbers but can be expressed with CNs. It retains the main characteristics of the OWD measure and the OWAD operator. The main advantage of this operator is that it can relieve (or intensify) the influence of unduly large (or small) deviations on the aggregation results by assigning them low (or high) weights. An interesting issue is the determination of the weighting vector associated with the COWD operator. In the literature, we find many methods for determining OWA weights [3], [5], [6], [11], [17], [19], [41], [42], [44] that can also be implemented for the COWD operator, such as the Gaussian distribution-based method [36] and the least squares-based method. We give three other ways to determine the weighting vector:

  1. Let

    (10) wj = dcs(C˜σ(j), D˜σ(j)) / ∑_{j=1}^{n} dcs(C˜σ(j), D˜σ(j)), j = 1, 2, …, n,

    then wj − wj+1 ≥ 0, j = 1, 2, …, n−1, and ∑_{j=1}^{n} wj = 1.

  2. Let

    (11) wj = e^{−dcs(C˜σ(j), D˜σ(j))} / ∑_{j=1}^{n} e^{−dcs(C˜σ(j), D˜σ(j))}, j = 1, 2, …, n,

    then wj+1 − wj ≥ 0, j = 1, 2, …, n−1, and ∑_{j=1}^{n} wj = 1.

  3. Let

    (12) d̄cs = (1/n) ∑_{j=1}^{n} dcs(C˜σ(j), D˜σ(j)),

    and

    (13) d(dcs(C˜σ(j), D˜σ(j)), d̄cs) = |dcs(C˜σ(j), D˜σ(j)) − d̄cs|.

Then, we define

(14) wj = (1 − d(dcs(C˜σ(j), D˜σ(j)), d̄cs)) / ∑_{j=1}^{n} (1 − d(dcs(C˜σ(j), D˜σ(j)), d̄cs)) = (1 − |dcs(C˜σ(j), D˜σ(j)) − (1/n) ∑_{j=1}^{n} dcs(C˜σ(j), D˜σ(j))|) / ∑_{j=1}^{n} (1 − |dcs(C˜σ(j), D˜σ(j)) − (1/n) ∑_{j=1}^{n} dcs(C˜σ(j), D˜σ(j))|),

then wj ≥ 0, j = 1, 2, …, n, and ∑_{j=1}^{n} wj = 1.

We find that the weight vector derived from Eq. (10) is a monotonically decreasing sequence, the weight vector derived from Eq. (11) is a monotonically increasing sequence, and the weight vector derived from Eq. (14) combines the above two cases: the closer the value dcs(C˜σ(j), D˜σ(j)) is to the mean (1/n) ∑_{j=1}^{n} dcs(C˜σ(j), D˜σ(j)), the larger the weight wj. We take an example to explain the above formulas and find the different weights from Eqs. (10), (11), and (14), respectively.
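The three weight-generation schemes of Eqs. (10), (11), and (14) can be sketched as follows (our own function names; `devs` is the list of deviations already sorted in descending order):

```python
import math

def weights_linear(devs):
    """Eq. (10): proportional to the deviations (monotonically decreasing)."""
    s = sum(devs)
    return [d / s for d in devs]

def weights_exp(devs):
    """Eq. (11): proportional to exp(-d) (monotonically increasing)."""
    e = [math.exp(-d) for d in devs]
    s = sum(e)
    return [x / s for x in e]

def weights_centered(devs):
    """Eq. (14): larger weight the closer a deviation is to the mean."""
    m = sum(devs) / len(devs)
    u = [1 - abs(d - m) for d in devs]
    s = sum(u)
    return [x / s for x in u]
```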

Example 1: Let {a1, a2, a3} be a nonempty set, and let C˜ and D˜ be two cubic sets defined as follows:

C˜ = {⟨[0.3, 0.4], 0.5⟩, ⟨[0.5, 0.6], 0.4⟩, ⟨[0.8, 0.9], 0.9⟩},
D˜ = {⟨[0.4, 0.5], 0.3⟩, ⟨[0.2, 0.4], 0.2⟩, ⟨[0.7, 0.8], 0.3⟩}.

Then,

dcs(C˜1, D˜1) = (1/3)(|0.3 − 0.4| + |0.4 − 0.5| + |0.5 − 0.3|) = 0.1333.

Similarly, we have

dcs(C˜2,D˜2)=0.2333,dcs(C˜3,D˜3)=0.2666,

Then, we have

dcs(C˜σ(1), D˜σ(1)) = 0.2666, dcs(C˜σ(2), D˜σ(2)) = 0.2333, dcs(C˜σ(3), D˜σ(3)) = 0.1333.

If we apply Eq. (10), then we get the weighting vector w = (0.4210, 0.3684, 0.2105). Suppose λ = 3 in all three cases. Therefore, we calculate the distance between C˜ and D˜ using the COWD operator, and we get the following result:

COWD(C˜, D˜) = [0.4210(0.2666)^3 + 0.3684(0.2333)^3 + 0.2105(0.1333)^3]^{1/3} = 0.2361.

If we use Eq. (11), then we find the weighting vector w=(0.3147, 0.3254, 0.3597) and calculate the COWD using Eq. (8) such that

COWD(C˜, D˜) = [0.3147(0.2666)^3 + 0.3254(0.2333)^3 + 0.3597(0.1333)^3]^{1/3} = 0.2220.

Similarly, using Eq. (14), we find the weighting vector w = (0.3320, 0.3437, 0.3242), where ∑_{j=1}^{n} wj = 1 (the middle deviation 0.2333 lies closest to the mean 0.2111 and therefore receives the largest weight), and we calculate the COWD using Eq. (8) such that

COWD(C˜, D˜) = [0.3320(0.2666)^3 + 0.3437(0.2333)^3 + 0.3242(0.1333)^3]^{1/3} = 0.2252.
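The arithmetic of Example 1 can be reproduced end to end (a sketch with our own variable names; the last digits may differ slightly from hand-rounded intermediate values):

```python
# deviations from Eq. (6), reordered in descending order per Eq. (9)
devs = [0.2666, 0.2333, 0.1333]

def cowd_from_devs(devs, weights, lam=3.0):
    # Eq. (8) applied to pre-computed, pre-sorted deviations
    return sum(w * d ** lam for w, d in zip(weights, devs)) ** (1.0 / lam)

s = sum(devs)
w10 = [d / s for d in devs]           # Eq. (10) weights
result10 = cowd_from_devs(devs, w10)  # COWD with lam = 3
```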

The COWD operator is commutative, monotonic, bounded, idempotent, nonnegative, and reflexive, but it does not always satisfy the triangle inequality. These properties can be proven with the following theorems:

Theorem 2: (Commutativity: OWA aggregation). Assume that f is the COWD operator. Then,

f((C˜1, D˜1), …, (C˜n, D˜n)) = f((C˜1′, D˜1′), …, (C˜n′, D˜n′)),

where ((C˜1, D˜1), …, (C˜n, D˜n)) is any permutation of the arguments ((C˜1′, D˜1′), …, (C˜n′, D˜n′)).

Proof. Let

f((C˜1, D˜1), …, (C˜n, D˜n)) = (∑_{j=1}^{n} wj (dcs(C˜σ(j), D˜σ(j)))^λ)^{1/λ},
f((C˜1′, D˜1′), …, (C˜n′, D˜n′)) = (∑_{j=1}^{n} wj (dcs(C˜σ(j)′, D˜σ(j)′))^λ)^{1/λ}.

Because ((C˜1, D˜1), …, (C˜n, D˜n)) is a permutation of the arguments ((C˜1′, D˜1′), …, (C˜n′, D˜n′)), we have dcs(C˜σ(j), D˜σ(j)) = dcs(C˜σ(j)′, D˜σ(j)′) for all j, and then f((C˜1, D˜1), …, (C˜n, D˜n)) = f((C˜1′, D˜1′), …, (C˜n′, D˜n′)). ■

Note that the commutativity of the COWD can also be studied from the context of a distance measure, which can be proven with the following theorem:

Theorem 3: (Commutativity-distance measure) Assuming that f is the COWD operator, then

f((C˜1,D˜1),,(C˜n,D˜n))=f((D˜1,C˜1),,(D˜n,C˜n)).

Proof. Let

f((C˜1, D˜1), …, (C˜n, D˜n)) = (∑_{j=1}^{n} wj (dcs(C˜σ(j), D˜σ(j)))^λ)^{1/λ},
f((D˜1, C˜1), …, (D˜n, C˜n)) = (∑_{j=1}^{n} wj (dcs(D˜σ(j), C˜σ(j)))^λ)^{1/λ}.

Because dcs(C˜j, D˜j) = dcs(D˜j, C˜j) for all j, then dcs(C˜σ(j), D˜σ(j)) = dcs(D˜σ(j), C˜σ(j)), j = 1, 2, …, n, and the two expressions coincide. ■

Theorem 4: (Monotonicity). Assume that f is the COWD operator, and let M = (γ1, γ2, …, γn) be a set of CNs. If dcs(C˜j, D˜j) ≤ dcs(C˜j, γj) for all j, then

f((C˜1, D˜1), …, (C˜n, D˜n)) ≤ f((C˜1, γ1), …, (C˜n, γn)).

Proof. Let

f((C˜1, D˜1), …, (C˜n, D˜n)) = (∑_{j=1}^{n} wj (dcs(C˜σ(j), D˜σ(j)))^λ)^{1/λ},
f((C˜1, γ1), …, (C˜n, γn)) = (∑_{j=1}^{n} wj (dcs(C˜σ(j), γσ(j)))^λ)^{1/λ}.

Because

Because dcs(C˜j, D˜j) ≤ dcs(C˜j, γj) for all j, then dcs(C˜σ(j), D˜σ(j)) ≤ dcs(C˜σ(j), γσ(j)), j = 1, 2, …, n, and the inequality between the two weighted sums follows. ■

Theorem 5: (Boundedness). Assume that f is the COWD operator. Then,

min_j dcs(C˜j, D˜j) ≤ f((C˜1, D˜1), …, (C˜n, D˜n)) ≤ max_j dcs(C˜j, D˜j).

Proof. Let r = max_j dcs(C˜j, D˜j) and s = min_j dcs(C˜j, D˜j), where ∑_{j=1}^{n} wj = 1. Then,

f((C˜1, D˜1), …, (C˜n, D˜n)) = (∑_{j=1}^{n} wj (dcs(C˜σ(j), D˜σ(j)))^λ)^{1/λ} ≤ (∑_{j=1}^{n} wj r^λ)^{1/λ} = (r^λ ∑_{j=1}^{n} wj)^{1/λ} = r,

f((C˜1, D˜1), …, (C˜n, D˜n)) = (∑_{j=1}^{n} wj (dcs(C˜σ(j), D˜σ(j)))^λ)^{1/λ} ≥ (∑_{j=1}^{n} wj s^λ)^{1/λ} = (s^λ ∑_{j=1}^{n} wj)^{1/λ} = s. ■

Theorem 6: (Idempotency). Assume that f is the COWD operator. If dcs(C˜j, D˜j) = d for all j, then f((C˜1, D˜1), …, (C˜n, D˜n)) = d.

Proof. Let

f((C˜1, D˜1), …, (C˜n, D˜n)) = (∑_{j=1}^{n} wj (dcs(C˜σ(j), D˜σ(j)))^λ)^{1/λ}.

Because dcs(C˜j, D˜j) = d for all j, then dcs(C˜σ(j), D˜σ(j)) = d, j = 1, 2, …, n, and hence f((C˜1, D˜1), …, (C˜n, D˜n)) = (d^λ ∑_{j=1}^{n} wj)^{1/λ} = d. ■

Theorem 7: (Nonnegativity) Assuming that f is the COWD operator, then

f((C˜1, D˜1), …, (C˜n, D˜n)) ≥ 0.

Proof. It is straightforward and thus omitted. ■

Theorem 8: (Reflexivity). Assuming that f is the COWD operator, then

f((C˜1,C˜1),,(C˜n,C˜n))=0.

Proof. Let

f((C˜1, C˜1), …, (C˜n, C˜n)) = (∑_{j=1}^{n} wj (dcs(C˜σ(j), C˜σ(j)))^λ)^{1/λ} = 0, since dcs(C˜j, C˜j) = 0 for all j. ■

4 Families of COWD Operators

By choosing different manifestations of the weighting vector w and the parameter λ, we are able to obtain a wide range of particular types of the COWD operator. The selection of a particular case depends on the interest of the decision-maker in the specific problem considered.

4.1 Analyzing the Weighting Vector w

By choosing a different manifestation of the weighting vector in the COWD operator, we are able to obtain different types of distance measures, such as the cubic maximum distance, cubic minimum distance, cubic normalized distance (CND), CWD, step COWD, median COWD, olympic COWD, and centered COWD.

Remark 2: For example, the cubic maximum distance, cubic minimum distance, step COWD, CND, and CWD are obtained as follows:

  1. The cubic maximum distance is found if w1=1 and wj=0 for j≠1.

  2. The cubic minimum distance is found if wn=1 and wj=0 for all j≠n.

  3. More generally, if wk=1 and wj=0 for all jk, we get the step COWD operator.

  4. The CND is formed when wj = 1/n for all j.

  5. The CWD is obtained when the ordered position of dcs(C˜j,D˜j) is the same as the ordered position of the dcs(C˜σ(j),D˜σ(j)).

4.2 Analyzing the Parameter λ

Remark 3: If λ=1, then the COWD measure is reduced to the COWHD operator:

(15) COWHD(C˜, D˜) = ∑_{j=1}^{n} wj dcs(C˜σ(j), D˜σ(j)),

where (σ(1), σ(2), …, σ(n)) is any permutation of (1, 2, …, n). Note that if wj = 1/n for all j, we get the CNHD. The CWHD is obtained if the ordered position of the dcs(C˜j, D˜j) is the same as the ordered position of the dcs(C˜σ(j), D˜σ(j)).

Remark 4: If λ=2, the COWD operator is reduced to the COWED operator such that

(16) COWED(C˜, D˜) = (∑_{j=1}^{n} wj (dcs(C˜σ(j), D˜σ(j)))^2)^{1/2},

where (σ(1), σ(2), …, σ(n)) is any permutation of (1, 2, …, n). Note that if wj = 1/n for all j, we get the CNED. The CWED is obtained if the ordered position of the dcs(C˜j, D˜j) is the same as the ordered position of the dcs(C˜σ(j), D˜σ(j)).

Remark 5: If λ→0, then the COWD measure is reduced to the COWGD operator such that

(17) COWGD(C˜, D˜) = ∏_{j=1}^{n} (dcs(C˜σ(j), D˜σ(j)))^{wj},

where (σ(1), σ(2), …, σ(n)) is any permutation of (1, 2, …, n). Note that if wj = 1/n for all j, we get the CNGD. The CWGD is obtained if the ordered position of the dcs(C˜j, D˜j) is the same as the ordered position of the dcs(C˜σ(j), D˜σ(j)). Note that the COWGD can be used only when all individual distances are nonzero, that is, when dcs(C˜j, D˜j) ≠ 0 for all j.
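The λ-families of Remarks 3–5 can be collected into one sketch (our own helper; `devs` is assumed already sorted descending, and λ = 0 is taken as the geometric limit):

```python
import math

def cowd_family(devs, weights, lam):
    """COWHD (lam=1), COWED (lam=2), COWGD (lam -> 0) from one formula."""
    if lam == 0:  # geometric limit of Eq. (8); requires all devs > 0
        return math.prod(d ** w for d, w in zip(devs, weights))
    return sum(w * d ** lam for w, d in zip(weights, devs)) ** (1.0 / lam)
```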

5 Multiple Attribute Group Decision-Making With the COWD Operator

In this section, we consider a decision-making application in the selection of investments under uncertainty. Let A = {A1, A2, …, Am} be a discrete set of alternatives, C = {C1, C2, …, Cn} the set of attributes (or characteristics), and E = {e1, e2, …, et} the set of decision-makers, whose weighting vector is

V = (v1, v2, …, vt), vk ≥ 0, ∑_{k=1}^{t} vk = 1.

Each decision-maker provides his or her own payoff matrix (Xhi(k))m×n. Moreover, according to their objectives, the decision-makers establish a collective ideal investment for the company using cubic subsets, as shown in Table 1, where I is the ideal strategy expressed by a cubic subset and C˜i is the ith characteristic to consider. Then, based on the COWD operator, we propose a method for group decision-making under a cubic environment that involves the following steps.

Table 1:

Ideal Strategy.

  C1 C2 … Ci … Cn
I X1 X2 … Xi … Xn

Step 1: In this step, we apply the cubic weighted averaging (CWA) operator such that

CWA(c1, c2, …, cn) = ⟨[1 − ∏_{j=1}^{n} (1 − a_{cj}^−)^{wj}, 1 − ∏_{j=1}^{n} (1 − a_{cj}^+)^{wj}], ∏_{j=1}^{n} λ_{cj}^{wj}⟩.

Step 2: Calculate the weighting vector W = (w1, w2, …, wn) to be used in the aggregation, with wj ∈ [0, 1] and ∑_{j=1}^{n} wj = 1.

Step 3: Calculate the distance between the ideal investment with the aggregated results using the COWD operator. Note that it is possible to consider a wide range of COWD operators, such as those described in Sections 3 and 4.

Step 4: Let dij denote the distance obtained for alternative Ai under the jth distance operator. In this step, we find the weighting vector W = (w1, w2, …, wt) of the distance operators using the following equation:

wj = ∑_{i=1}^{m} dij / ∑_{j=1}^{t} ∑_{i=1}^{m} dij.

Now, we find the aggregated distance measure of each alternative using the following equation:

Di = Dagg(di1, di2, …, dit) = ∑_{j=1}^{t} wj dij.

Step 5: Adopt decisions according to the results found in the previous steps. Select the alternatives that provide the best results. Moreover, establish an ordering or ranking of the alternatives from the most to the least preferred alternative to enable the consideration of more than one selection.
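Step 1 can be sketched as follows (our own code; CNs are (lo, hi, lam) triples and the weights sum to 1). It reproduces, up to rounding, the collective value for A1 under C1 in Table 6 from the three expert values in Tables 2–4 with V = (0.3, 0.3, 0.4):

```python
import math

def cwa(cubic_values, weights):
    """CWA operator of Step 1 applied to one cell across the experts."""
    lo = 1 - math.prod((1 - c[0]) ** w for c, w in zip(cubic_values, weights))
    hi = 1 - math.prod((1 - c[1]) ** w for c, w in zip(cubic_values, weights))
    lam = math.prod(c[2] ** w for c, w in zip(cubic_values, weights))
    return (lo, hi, lam)
```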

6 Numerical Example

In the following, we develop a numerical example of the new approach. We analyze the results obtained using different types of COWD operators and see that, depending on the aggregation operator used, the decision may be different. The COWD operator may be applied to problems similar to those handled by the OWD measure and the OWAD operator. Assume that a decision-maker wants to invest money in a company. After analyzing the market, he considers six possible alternatives:

  1. Invest in a chemical company called A1;

  2. Invest in a food company called A2;

  3. Invest in a computer company called A3;

  4. Invest in a car company called A4;

  5. Invest in a furniture company called A5;

  6. Invest in a pharmaceutical company called A6.

After careful review of the information, the group of experts establishes the following general information about the investments. They summarize the information of the investment in six general characteristics:

  1. C1: benefits in the short term;

  2. C2: benefits in the mid term;

  3. C3: benefits in the long term;

  4. C4: risk of the investment;

  5. C5: difficulty of the investment;

  6. C6: other factors.

The group of company experts is constituted by three persons, each offering their own opinions regarding the results obtained with each investment. The results are shown in Tables 2–4, represented as CNs xij = ⟨C˜xij, D˜xij⟩, where C˜xij denotes the degree to which the alternative Ai satisfies the situation Cj and D˜xij denotes the degree to which it does not. According to their objectives, the company establishes the collective ideal investment shown in Table 5. With this information, we can aggregate and make a decision. First, we aggregate the information of the three experts into a unified payoff matrix. We use the CWA operator to obtain this matrix, assuming V = (0.3, 0.3, 0.4); the results are shown in Table 6. It is now possible to develop different methods based on the COWD operator for the selection of an investment. In this example, we consider the cubic maximum distance, cubic minimum distance, COWHD, COWED, COWGD, CWHD, and CWED operators. For convenience, we assume the weighting vector W = (0.06, 0.05, 0.28, 0.26, 0.17, 0.18). The results are shown in Table 7. As we can see, the best alternative is most frequently A6, because it is the one with the lowest distance to the ideal investment in most columns. However, for some particular situations, we may find another optimal choice. Therefore, it is of interest to establish an ordering of the investments for each particular case; note that the best choice is the one with the lowest distance. The orderings are shown in Table 8. As we can see, depending on the distance aggregation operator used, the ordering of the strategies, and hence the final decision, may differ.

Table 2:

Characterization of the Investments of Expert 1.


C1
C2
C3
A1 〈[0.5, 0.7], 0.3〉 〈[0.4, 0.6], 0.5〉 〈[0.6, 0.7], 0.4〉
A2 〈[0.5, 0.6], 0.2〉 〈[0.6, 0.7], 0.3〉 〈[0.3, 0.5], 0.9〉
A3 〈[0.7, 0.8], 0.5〉 〈[0.3, 0.6], 0.5〉 〈[0.5, 0.6], 0.1〉
A4 〈[0.3, 0.6], 0.9〉 〈[0.3, 0.5], 0.8〉 〈[0.7, 0.9], 0.3〉
A5 〈[0.3, 0.4], 0.6〉 〈[0.1, 0.3], 0.7〉 〈[0.3, 0.4], 0.5〉
A6 〈[0.5, 0.6], 0.3〉 〈[0.3, 0.4], 0.3〉 〈[0.6, 0.7], 0.3〉
C4
C5
C6
A1 〈[0.1, 0.3], 0.6〉 〈[0.6, 0.9], 0.1〉 〈[0.8, 0.9], 0.3〉
A2 〈[0.6, 0.8], 0.3〉 〈[0.3, 0.4], 0.6〉 〈[0.3, 0.5], 0.6〉
A3 〈[0.7, 0.9], 0.2〉 〈[0.3, 0.7], 0.1〉 〈[0.3, 0.7], 0.5〉
A4 〈[0.5, 0.6], 0.3〉 〈[0.1, 0.2], 0.3〉 〈[0.6, 0.8], 0.3〉
A5 〈[0.8, 0.9], 0.3〉 〈[0.6, 0.7], 0.2〉 〈[0.2, 0.3], 0.9〉
A6 〈[0.3, 0.7], 0.9〉 〈[0.1, 0.5], 0.3〉 〈[0.3, 0.9], 0.6〉
Table 3:

Characterization of the Investments of Expert 2.

C1
C2
C3
A1 〈[0.4, 0.6], 0.3〉 〈[0.5, 0.6], 0.2〉 〈[0.5, 0.8], 0.3〉
A2 〈[0.5, 0.7], 0.4〉 〈[0.3, 0.4], 0.5〉 〈[0.3, 0.6], 0.9〉
A3 〈[0.8, 0.9], 0.6〉 〈[0.5, 0.6], 0.3〉 〈[0.6, 0.7], 0.5〉
A4 〈[0.6, 0.9], 0.3〉 〈[0.1, 0.5], 0.4〉 〈[0.3, 0.6], 0.7〉
A5 〈[0.3, 0.5], 0.6〉 〈[0.5, 0.7], 0.1〉 〈[0.4, 0.6], 0.1〉
A6 〈[0.2, 0.7], 0.3〉 〈[0.3, 0.4], 0.5〉 〈[0.1, 0.2], 0.5〉
C4
C5
C6
A1 〈[0.6, 0.8], 0.5〉 〈[0.3, 0.4], 0.3〉 〈[0.6, 0.7], 0.5〉
A2 〈[0.5, 0.9], 0.3〉 〈[0.5, 0.8], 0.6〉 〈[0.3, 0.5], 0.6〉
A3 〈[0.6, 0.7], 0.3〉 〈[0.7, 0.8], 0.3〉 〈[0.3, 0.6], 0.7〉
A4 〈[0.3, 0.8], 0.4〉 〈[0.3, 0.7], 0.5〉 〈[0.2, 0.4], 0.6〉
A5 〈[0.1, 0.6], 0.9〉 〈[0.2, 0.4], 0.7〉 〈[0.4, 0.5], 0.2〉
A6 〈[0.3, 0.5], 0.3〉 〈[0.2, 0.8], 0.3〉 〈[0.1, 0.3], 0.6〉
Table 4:

Characterization of the Investments of Expert 3.

C1
C2
C3
A1 〈[0.2, 0.3], 0.6〉 〈[0.5, 0.7], 0.2〉 〈[0.1, 0.3], 0.9〉
A2 〈[0.5, 0.7], 0.3〉 〈[0.3, 0.9], 0.8〉 〈[0.3, 0.6], 0.8〉
A3 〈[0.6, 0.8], 0.5〉 〈[0.6, 0.7], 0.5〉 〈[0.5, 0.7], 0.1〉
A4 〈[0.3, 0.4], 0.1〉 〈[0.6, 0.8], 0.3〉 〈[0.7, 0.8], 0.3〉
A5 〈[0.2, 0.8], 0.3〉 〈[0.5, 0.6], 0.4〉 〈[0.3, 0.4], 0.9〉
A6 〈[0.1, 0.3], 0.5〉 〈[0.6, 0.7], 0.3〉 〈[0.8, 0.9], 0.6〉
C4
C5
C6
A1 〈[0.6, 0.8], 0.6〉 〈[0.3, 0.4], 0.9〉 〈[0.1, 0.6], 0.3〉
A2 〈[0.5, 0.7], 0.6〉 〈[0.2, 0.3], 0.6〉 〈[0.5, 0.6], 0.2〉
A3 〈[0.3, 0.8], 0.3〉 〈[0.6, 0.7], 0.1〉 〈[0.2, 0.5], 0.5〉
A4 〈[0.5, 0.9], 0.5〉 〈[0.3, 0.6], 0.9〉 〈[0.3, 0.6], 0.2〉
A5 〈[0.6, 0.9], 0.7〉 〈[0.3, 0.8], 0.1〉 〈[0.2, 0.5], 0.6〉
A6 〈[0.3, 0.7], 0.8〉 〈[0.2, 0.5], 0.6〉 〈[0.3, 0.5], 0.8〉
Table 5:

Collective Ideal Strategy.


C1
C2
C3
Y 〈[0.3, 0.4], 0.5〉 〈[0.6, 0.7], 0.2〉 〈[0.5, 0.7], 0.6〉
C4
C5
C6
Y 〈[0.8, 0.9], 0.3〉 〈[0.3, 0.4], 0.1〉 〈[0.3, 0.6], 0.2〉
Table 6:

Collective Result.


C1
C2
C3
A1 〈[0.3628, 0.5410], 0.3958〉 〈[0.4719, 0.6439], 0.2632〉 〈[0.4085, 0.6267], 0.5075〉
A2 〈[0.5002, 0.6731], 0.2895〉 〈[0.4082, 0.7620], 0.5176〉 〈[0.3000, 0.5723], 0.8585〉
A3 〈[0.7021, 0.8376], 0.5281〉 〈[0.4941, 0.6435], 0.4289〉 〈[0.5324, 0.6730], 0.1620〉
A4 〈[0.4082, 0.6879], 0.2687〉 〈[0.3966, 0.6534], 0.4389〉 〈[0.6131, 0.8000], 0.3868〉
A5 〈[0.2617, 0.6340], 0.4547〉 〈[0.4037, 0.5660], 0.3121〉 〈[0.3316, 0.4688], 0.3902〉
A6 〈[0.2719, 0.5411], 0.3680〉 〈[0.4404, 0.5453], 0.3496〉 〈[0.6134, 0.7405], 0.4614〉
C4
C5
C6
A1 〈[0.4899, 0.7087], 0.5680〉 〈[0.4082, 0.6495], 0.3348〉 〈[0.5606, 0.7579], 0.3496〉
A2 〈[0.5327, 0.8089], 0.3958〉 〈[0.3325, 0.5410], 0.6000〉 〈[0.3881, 0.5427], 0.3866〉
A3 〈[0.5411, 0.8165], 0.2656〉 〈[0.5660, 0.7343], 0.1390〉 〈[0.2618, 0.5988], 0.5531〉
A4 〈[0.4469, 0.8134], 0.4011〉 〈[0.2452, 0.5483], 0.5426〉 〈[0.3840, 0.6331], 0.3140〉
A5 〈[0.5826, 0.8484], 0.5853〉 〈[0.3840, 0.6859], 0.2207〉 〈[0.2662, 0.4469], 0.4873〉
A6 〈[0.3000, 0.6503], 0.6175〉 〈[0.1713, 0.6202], 0.3958〉 〈[0.2453, 0.6587], 0.6731〉
Table 7:

Aggregated Result.

Max Min COWHD COWED COWGD CWHD CWED
A1 0.2564 0.0824 0.1194 0.1443 0.1233 0.1675 0.1815
A2 0.2279 0.1106 0.1593 0.1619 0.1543 0.1592 0.1630
A3 0.2892 0.1256 0.1283 0.1700 0.1609 0.1629 0.1682
A4 0.2152 0.0770 0.1413 0.1459 0.1362 0.1533 0.1595
A5 0.2031 0.1058 0.1520 0.1543 0.1499 0.1745 0.1757
A6 0.2149 0.0975 0.1279 0.1330 0.1235 0.1401 0.1486
Table 8:

Ordering of the Strategies.

Max A5> A6> A4> A2> A1> A3
Min A4> A1> A6> A5> A2> A3
COWHD A1> A6> A3> A4> A5> A2
COWED A6> A1> A4> A5> A2> A3
COWGD A1> A6> A4> A5> A2> A3
CWHD A6> A4> A2> A3> A1> A5
CWED A6> A4> A2> A3> A5> A1
Table 9:

Final Result.

Weight (w) Aggregated distance measure
w1=0.2149 d(A1)=0.1658
w2=0.0915 d(A2)=0.1696
w3=0.1226 d(A3)=0.1837
w4=0.1389 d(A4)=0.1557
w5=0.1295 d(A5)=0.1656
w6=0.1462 d(A6)=0.1488
w7=0.1522

The alternatives are ranked by their aggregated distances; the alternative with the minimum aggregated distance is the best one. Hence, A6>A4>A5>A1>A2>A3 (see Table 9).
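The final aggregation can be sketched as follows, under the assumption (consistent with Tables 8 and 9 up to rounding) that each alternative's aggregated distance is the weighted sum of its seven distance measures, using the weights w1 to w7 of Table 9:

```python
# Rows of Table 8: the seven distance measures (Max, Min, COWHD, COWED,
# COWGD, CWHD, CWED) for each alternative A1..A6. A6's Min entry is read
# as 0.0975, consistent with the scale of the column.
measures = {
    "A1": [0.2564, 0.0824, 0.1194, 0.1443, 0.1233, 0.1675, 0.1815],
    "A2": [0.2279, 0.1106, 0.1593, 0.1619, 0.1543, 0.1592, 0.1630],
    "A3": [0.2892, 0.1256, 0.1283, 0.1700, 0.1609, 0.1629, 0.1682],
    "A4": [0.2152, 0.0770, 0.1413, 0.1459, 0.1362, 0.1533, 0.1595],
    "A5": [0.2031, 0.1058, 0.1520, 0.1543, 0.1499, 0.1745, 0.1757],
    "A6": [0.2149, 0.0975, 0.1279, 0.1330, 0.1235, 0.1401, 0.1486],
}
w = [0.2149, 0.0915, 0.1226, 0.1389, 0.1295, 0.1462, 0.1522]  # Table 9 weights

# Assumed aggregation: d(Ai) = sum_j w_j * m_j(Ai).
d = {a: sum(wj * mj for wj, mj in zip(w, m)) for a, m in measures.items()}

# The smallest aggregated distance identifies the best alternative.
ranking = sorted(d, key=d.get)
print(ranking)  # ['A6', 'A4', 'A5', 'A1', 'A2', 'A3']
```

The computed values agree with the d(Ai) column of Table 9 to within rounding of the tabulated measures.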

7 Further Discussion

To show the validity and effectiveness of the proposed methods, we use intuitionistic fuzzy numbers to solve the same problem described above and apply the aggregation operators developed in this paper. After simplification, we obtain the ranking result A6>A4>A5>A1>A2>A3, so A6 is the best alternative. For the same example, we now express the decision makers' evaluations by intuitionistic fuzzy numbers. Shouzhen [22] proposed the IFOWA operator to deal with multiple attribute group decision-making with intuitionistic fuzzy information; the expert evaluations are given in Tables 1–3.

Table 1:

Characterization of the Investments of Expert 1.

C1 C2 C3 C4 C5 C6
A1 (0.5, 0.3) (0.4, 0.5) (0.6, 0.4) (0.1, 0.6) (0.1, 0.6) (0.8, 0.3)
A2 (0.5, 0.2) (0.6, 0.3) (0.3, 0.9) (0.6, 0.3) (0.3, 0.6) (0.3, 0.6)
A3 (0.7, 0.5) (0.3, 0.5) (0.5, 0.1) (0.7, 0.2) (0.3, 0.1) (0.3, 0.5)
A4 (0.3, 0.9) (0.3, 0.8) (0.7, 0.3) (0.5, 0.3) (0.1, 0.3) (0.6, 0.3)
A5 (0.3, 0.6) (0.1, 0.7) (0.3, 0.5) (0.8, 0.3) (0.6, 0.2) (0.2, 0.9)
A6 (0.5, 0.3) (0.3, 0.3) (0.6, 0.3) (0.3, 0.9) (0.1, 0.3) (0.3, 0.6)
Table 2:

Characterization of the Investments of Expert 2.

C1 C2 C3 C4 C5 C6
A1 (0.4, 0.3) (0.5, 0.2) (0.5, 0.3) (0.6, 0.5) (0.3, 0.3) (0.6, 0.5)
A2 (0.5, 0.4) (0.3, 0.5) (0.3, 0.9) (0.5, 0.3) (0.5, 0.6) (0.3, 0.6)
A3 (0.8, 0.6) (0.5, 0.3) (0.6, 0.5) (0.6, 0.3) (0.7, 0.3) (0.3, 0.7)
A4 (0.6, 0.3) (0.1, 0.4) (0.3, 0.7) (0.3, 0.4) (0.3, 0.5) (0.2, 0.6)
A5 (0.3, 0.6) (0.5, 0.1) (0.4, 0.1) (0.1, 0.9) (0.2, 0.7) (0.4, 0.2)
A6 (0.2, 0.3) (0.3, 0.5) (0.1, 0.5) (0.3, 0.3) (0.2, 0.3) (0.1, 0.6)
Table 3:

Characterization of the Investments of Expert 3.

C1 C2 C3 C4 C5 C6
A1 (0.2, 0.6) (0.5, 0.2) (0.1, 0.9) (0.6, 0.6) (0.3, 0.9) (0.1, 0.3)
A2 (0.5, 0.3) (0.3, 0.8) (0.3, 0.8) (0.5, 0.6) (0.2, 0.6) (0.5, 0.2)
A3 (0.6, 0.5) (0.6, 0.5) (0.5, 0.1) (0.3, 0.3) (0.6, 0.1) (0.2, 0.5)
A4 (0.3, 0.1) (0.6, 0.3) (0.7, 0.3) (0.5, 0.5) (0.3, 0.9) (0.3, 0.2)
A5 (0.2, 0.3) (0.5, 0.4) (0.3, 0.9) (0.6, 0.7) (0.3, 0.1) (0.2, 0.6)
A6 (0.1, 0.5) (0.6, 0.3) (0.8, 0.6) (0.3, 0.8) (0.2, 0.6) (0.3, 0.8)
Table 4:

Collective Result.

C1 C2 C3 C4 C5 C6
A1 (0.36, 0.39) (0.74, 0.26) (0.40, 0.50) (0.48, 0.56) (0.40, 0.33) (0.56, 0.34)
A2 (0.50, 0.29) (0.41, 0.51) (0.30, 0.85) (0.53, 0.39) (0.33, 0.60) (0.85, 0.38)
A3 (0.70, 0.52) (0.49, 0.42) (0.53, 0.16) (0.54, 0.26) (0.56, 0.13) (0.26, 0.55)
A4 (0.40, 0.26) (0.39, 0.43) (0.61, 0.38) (0.44, 0.40) (0.24, 0.54) (0.38, 0.31)
A5 (0.26, 0.45) (0.40, 0.31) (0.33, 0.39) (0.58, 0.58) (0.38, 0.22) (0.26, 0.48)
A6 (0.72, 0.36) (0.44, 0.34) (0.61, 0.46) (0.30, 0.61) (0.71, 0.39) (0.24, 0.67)
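The fusion step that produces collective values like those in Table 4 can be sketched with Xu's IFOWA operator [37]. The equal expert weights below are an assumption (the paper's actual weighting vector is not restated here), so the result is close to, but not identical with, the (0.36, 0.39) entry for A1 under C1:

```python
from math import prod

def score(a):
    # Score of an intuitionistic fuzzy number (mu, nu): s = mu - nu.
    return a[0] - a[1]

def ifowa(args, weights):
    # IFOWA (Xu [37]): reorder the arguments by decreasing score, then
    # aggregate componentwise:
    #   mu = 1 - prod_j (1 - mu_sigma(j))^w_j,  nu = prod_j nu_sigma(j)^w_j.
    ordered = sorted(args, key=score, reverse=True)
    mu = 1 - prod((1 - m) ** w for (m, _), w in zip(ordered, weights))
    nu = prod(n ** w for (_, n), w in zip(ordered, weights))
    return mu, nu

# Evaluations of A1 under C1 by Experts 1, 2, 3 (Tables 1-3),
# fused with assumed equal weights.
evals = [(0.5, 0.3), (0.4, 0.3), (0.2, 0.6)]
mu, nu = ifowa(evals, [1 / 3] * 3)
print(round(mu, 4), round(nu, 4))  # 0.3786 0.378
```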

We now determine the best alternative from the intuitionistic fuzzy information. After the computation process, the overall collective values of Ai (i=1, 2, ..., 6) are as follows:

Table 5:

Aggregated and Ranking Result.

A1 (0.43, 0.41), S(A1)=0.01
A2 (0.55, 0.51), S(A2)=0.04
A3 (0.53, 0.24), S(A3)=0.29
A4 (0.47, 0.38), S(A4)=0.09
A5 (0.45, 0.40), S(A5)=0.05
A6 (0.52, 0.49), S(A6)=0.03

Now, we find the ranking A1>A6>A2>A5>A4>A3. In this case, A1 is the best alternative.
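This ranking can be reproduced by scoring the aggregated values of Table 5 with s(mu, nu) = mu − nu and listing the alternatives in ascending order of score, as the paper does. (Table 5 gives S(A1)=0.01 while mu − nu = 0.02 for the listed pair, presumably because the tabulated (mu, nu) values are themselves rounded.)

```python
# Aggregated intuitionistic fuzzy values from Table 5.
collective = {
    "A1": (0.43, 0.41), "A2": (0.55, 0.51), "A3": (0.53, 0.24),
    "A4": (0.47, 0.38), "A5": (0.45, 0.40), "A6": (0.52, 0.49),
}

# Score function s(mu, nu) = mu - nu, rounded as in the table.
scores = {a: round(mu - nu, 2) for a, (mu, nu) in collective.items()}

# Ascending order of score, matching the paper's listed ranking.
ranking = sorted(scores, key=scores.get)
print(ranking)  # ['A1', 'A6', 'A2', 'A5', 'A4', 'A3']
```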

It is noted that the ranking orders obtained in this paper and by Shouzhen [22] are very different; the weakness of the latter lies in the limited information-representation ability of intuitionistic fuzzy numbers. Therefore, CNs may reflect the decision information better than intuitionistic fuzzy numbers in real decision-making problems. Hence, our proposed approach performs better than the IFOWA operator.
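The extra information carried by a cubic number can be seen in a distance computation. The function below is a hypothetical normalized Hamming distance between cubic numbers ⟨[a, b], λ⟩ with equal 1/3 component weights; this is one common choice, not necessarily the paper's exact definition. Unlike a distance on intuitionistic fuzzy numbers, it compares both interval bounds as well as the fuzzy value:

```python
def cn_hamming(x, y):
    # Assumed normalized Hamming distance between two cubic numbers
    # <[a, b], lam>: mean absolute difference over the interval bounds
    # and the fuzzy value (equal 1/3 weighting is an assumption).
    (a1, b1), l1 = x
    (a2, b2), l2 = y
    return (abs(a1 - a2) + abs(b1 - b2) + abs(l1 - l2)) / 3

a1_c1 = ((0.3628, 0.5410), 0.3958)  # collective value of A1 under C1 (Table 6)
y_c1 = ((0.3, 0.4), 0.5)            # ideal strategy under C1 (Table 5)
print(round(cn_hamming(a1_c1, y_c1), 4))  # 0.1027
```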

7.1 Comparison

We compare the results between the COWD operator and the IFOWA operator.

Comparison table of the COWD and IFOWA operators

COWD Distance measure Ranking (1)
d(A1) 0.1658 4
d(A2) 0.1696 5
d(A3) 0.1837 6
d(A4) 0.1557 2
d(A5) 0.1656 3
d(A6) 0.1488 1
IFOWA Score function Ranking (2) Final ranking
S(A1) 0.01 1 4
S(A2) 0.04 3 5
S(A3) 0.29 6 6
S(A4) 0.09 5 2
S(A5) 0.05 4 3
S(A6) 0.03 2 1

Comparing the results of the COWD and IFOWA operators, obtained via the aggregated distance measure and the score function, respectively, the COWD operator yields the better result, as shown in the table above.

8 Conclusions

In this paper, we have suggested the COWD operator, which is very useful for dealing with decision information represented by CNs under uncertain situations. The main advantage of the COWD operator is that it can alleviate the influence of unduly large (or small) deviations on the aggregation results by assigning them low (or high) weights. Moreover, it provides a parameterized family of aggregation operators and distance measures. We have given three ways to determine the associated weighting vectors and studied some of the operator's main properties and particular cases. Using the relationship between distance measures and similarity measures, the corresponding ordered weighted similarity measures for cubic sets have been obtained. The COWD operator can be applied in many situations already considered with the Hamming and Euclidean distances, such as statistics, economics, engineering, decision theory, and soft computing. In this paper, we have focused on an application to a group decision-making problem regarding the selection of investments. We have seen that this approach provides better information for decision making because it is able to consider a wide range of scenarios depending on the interests of the decision-maker. In future research, we expect to develop further extensions by adding new characteristics to the problem, such as the use of inducing variables or probabilistic aggregations.

Bibliography

[1] B. S. Ahn, Some remarks on the LSOWA approach for obtaining OWA operator weights, Int. J. Intell. Syst. 24 (2009), 1265–1279. doi:10.1002/int.20384.

[2] B. S. Ahn and H. Park, Least-squared ordered weighted averaging operator weights, Int. J. Intell. Syst. 23 (2008), 33–49. doi:10.1002/int.20257.

[3] G. Beliakov, Learning weights in the generalized OWA operators, Fuzzy Optim. Decis. Making 4 (2005), 119–130. doi:10.1007/s10700-004-5868-3.

[4] N. Chen, Z. S. Xu and M. M. Xia, Interval valued hesitant preference relations and their applications to group decision making, Knowl. Based Syst. 37 (2013), 528–540. doi:10.1016/j.knosys.2012.09.009.

[5] C. H. Cheng, J. W. Wang and M. C. Wu, OWA-weighted based clustering method for classification problem, Expert Syst. Appl. 36 (2009), 4988–4995. doi:10.1016/j.eswa.2008.06.013.

[6] F. Herrera, E. Herrera-Viedma and F. Chiclana, A study of the origin and uses of the ordered weighted geometric operator in multicriteria decision making, Int. J. Intell. Syst. 18 (2003), 689–707. doi:10.1002/int.10106.

[7] F. Herrera, L. Martinez and P. J. Sánchez, Managing nonhomogeneous information in group decision making, Eur. J. Oper. Res. 166 (2005), 115–132. doi:10.1016/j.ejor.2003.11.031.

[8] E. Herrera-Viedma and J. L. Garcia-Lapresta, Information fusion in consensus and decision making, Inf. Fusion 17 (2014), 2–3. doi:10.1016/j.inffus.2013.05.005.

[9] W. L. Hung and M. S. Yang, Similarity measures of intuitionistic fuzzy sets based on Hausdorff distance, Pattern Recogn. Lett. 25 (2004), 1603–1611. doi:10.1016/j.patrec.2004.06.006.

[10] C. L. Hwang and K. Yoon, Multiple Attribute Decision Making: Methods and Applications. A State-of-the-Art Survey, Springer-Verlag, Berlin Heidelberg, 1981. doi:10.1007/978-3-642-48318-9_3.

[11] Y. B. Jun, C. S. Kim and K. O. Yang, Cubic sets, Ann. Fuzzy Math. Inf. 4 (2012), 83–98.

[12] Y. B. Jun, F. Smarandache and C. S. Kim, Neutrosophic cubic sets, New Math. Nat. Comput. 13 (2017), 41–54. doi:10.1142/S1793005717500041.

[13] X. Liu, A general model of parameterized OWA aggregation with given orness level, Int. J. Approx. Reason. 48 (2008), 598–627. doi:10.1016/j.ijar.2007.11.003.

[14] M. Lu and G. W. Wei, Models for multiple attribute decision making with dual hesitant fuzzy uncertain linguistic information, Int. J. Knowl. Based Intell. Eng. Syst. 20 (2016), 217–227. doi:10.3233/KES-160349.

[15] T. Mahmood and Q. Khan, Cubic hesitant fuzzy sets and their applications to multi criteria decision making, Int. J. Algebra Stat. 5 (2016), 19–51. doi:10.20454/ijas.2016.1055.

[16] J. M. Merigó and A. M. Gil-Lafuente, The ordered weighted averaging distance operator, Lect. Modell. Simul. 8 (2007), 1–11.

[17] J. M. Merigó and A. M. Gil-Lafuente, Using the OWA operator in the Minkowski distance, Int. J. Elec. Comput. Eng. 3 (2008), 149–157.

[18] J. M. Merigó and A. M. Gil-Lafuente, The induced generalized OWA operator, Inf. Sci. 179 (2009), 729–740. doi:10.1016/j.ins.2008.11.013.

[19] J. M. Merigó and M. Casanovas, The fuzzy generalized OWA operator and its application in strategic decision making, Cybern. Syst. 41 (2010), 359–370. doi:10.1080/01969722.2010.486223.

[20] J. M. Merigó and A. M. Gil-Lafuente, New decision making techniques and their application in the selection of financial products, Inf. Sci. 180 (2010), 2085–2094. doi:10.1016/j.ins.2010.01.028.

[21] J. M. Merigó and M. Casanovas, Induced aggregation operators in the Euclidean distance and its application in financial decision making, Expert Syst. Appl. 38 (2011), 7603–7608. doi:10.1016/j.eswa.2010.12.103.

[22] Z. Shouzhen, The intuitionistic fuzzy ordered weighted averaging-weighted average operator and its application in financial decision making, World Acad. Sci. Eng. Technol. 68 (2012), 745–751.

[23] Z. Shouzhen, An extension of OWAD operator and its application to uncertain multiple-attribute group decision-making, Cybern. Syst. 47 (2016), 363–375. doi:10.1080/01969722.2016.1182362.

[24] Z. Shouzhen and Y. Xiao, TOPSIS method for intuitionistic fuzzy multiple-criteria decision making and its application to investment selection, Kybernetes 45 (2016), 282–296. doi:10.1108/K-04-2015-0093.

[25] Z. Shouzhen, S. Weihua and Z. Chonghui, Intuitionistic fuzzy generalized probabilistic ordered weighted averaging operator and its application to group decision making, Technol. Econ. Dev. Econ. 22 (2016), 177–193. doi:10.3846/20294913.2014.984253.

[26] F. Smarandache, A Unifying Field in Logics: Neutrosophic Logic. Neutrosophy, Neutrosophic Set, Neutrosophic Probability, American Research Press, Rehoboth, NM, 1999.

[27] F. Smarandache, Neutrosophic set – a generalization of the intuitionistic fuzzy set, Int. J. Pure Appl. Math. 24 (2005), 287–297. doi:10.1109/GRC.2006.1635754.

[28] V. Torra and Y. Narukawa, On hesitant fuzzy sets and decision, in: 18th IEEE International Conference on Fuzzy Systems, Jeju Island, Korea, pp. 1378–1382, 2009. doi:10.1109/FUZZY.2009.5276884.

[29] J. Vanicek, I. Vrana and S. Aly, Fuzzy aggregation and averaging for group decision making: a generalization and survey, Knowl. Based Syst. 22 (2009), 79–84. doi:10.1016/j.knosys.2008.07.002.

[30] Y. M. Wang and T. M. Elhag, A fuzzy group decision making approach for bridge risk assessment, Comput. Ind. Eng. 53 (2007), 137–148. doi:10.1016/j.cie.2007.04.009.

[31] G. W. Wei, Some induced geometric aggregation operators with intuitionistic fuzzy information and their application to group decision making, Appl. Soft Comput. 10 (2010), 423–431. doi:10.1016/j.asoc.2009.08.009.

[32] G. W. Wei, Interval valued hesitant fuzzy uncertain linguistic aggregation operators in multiple attribute decision making, Int. J. Mach. Learn. Cybern. 7 (2016), 1093–1114. doi:10.1007/s13042-015-0433-7.

[33] G. W. Wei, Picture fuzzy cross-entropy for multiple attribute decision making problems, J. Bus. Econ. Manage. 17 (2016), 491–502. doi:10.3846/16111699.2016.1197147.

[34] G. W. Wei, F. E. Alsaadi, T. Hayat and A. Alsaedi, Picture 2-tuple linguistic aggregation operators in multiple attribute decision making, Soft Comput. 22 (2018), 989–1002. doi:10.1007/s00500-016-2403-8.

[35] G. W. Wei, X. R. Xu and D. X. Deng, Interval-valued dual hesitant fuzzy linguistic geometric aggregation operators in multiple attribute decision making, Int. J. Knowl. Based Intell. Eng. Syst. 20 (2016), 189–196. doi:10.3233/KES-160337.

[36] Z. S. Xu, An overview of methods for determining OWA weights, Int. J. Intell. Syst. 20 (2005), 843–865. doi:10.1002/int.20097.

[37] Z. S. Xu, Intuitionistic fuzzy aggregation operators, IEEE Trans. Fuzzy Syst. 15 (2007), 1179–1187. doi:10.1109/TFUZZ.2006.890678.

[38] Z. S. Xu and J. Chen, Ordered weighted distance measure, J. Syst. Sci. Syst. Eng. 17 (2008), 432–445. doi:10.1007/s11518-008-5084-8.

[39] R. R. Yager, On ordered weighted averaging aggregation operators in multicriteria decision making, IEEE Trans. Syst. Man Cybern. B 18 (1988), 183–190. doi:10.1109/21.87068.

[40] R. R. Yager, Families of OWA operators, Fuzzy Sets Syst. 59 (1993), 125–148. doi:10.1016/0165-0114(93)90194-M.

[41] R. R. Yager, Centered OWA operators, Soft Comput. 11 (2007), 631–639. doi:10.1007/s00500-006-0125-z.

[42] R. R. Yager, Weighted maximum entropy OWA aggregation with applications to decision making under risk, IEEE Trans. Syst. Man Cybern. A 39 (2009), 555–564. doi:10.1109/TSMCA.2009.2014535.

[43] R. R. Yager, Norms induced from OWA operators, IEEE Trans. Fuzzy Syst. 18 (2010), 57–66. doi:10.1109/TFUZZ.2009.2035812.

[44] R. R. Yager and D. P. Filev, Induced ordered weighted averaging operators, IEEE Trans. Syst. Man Cybern. B 29 (1999), 141–150. doi:10.1109/3477.752789.

[45] L. A. Zadeh, Fuzzy sets, Inf. Control 8 (1965), 338–353. doi:10.21236/AD0608981.

Received: 2017-02-02
Published Online: 2018-03-10

©2020 Walter de Gruyter GmbH, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 Public License.
