Abstract

Due to the inevitable delay in the process of signal conversion and transmission, time delay is bound to occur between neurons. Therefore, it is necessary to introduce the concept of time delay into membrane computing models. Spiking neural P systems (SN P systems), an attractive type of neural-like P systems in membrane computing, have received wide attention. Inspired by the phenomenon of time delay, this work proposes a new variant of spiking neural P systems called delayed spiking neural P systems (DSN P systems). Compared with standard spiking neural P systems, the proposed systems achieve time control by attaching schedules to spiking rules and forgetting rules, and the schedules also realize the system delay. A schedule indicates the time difference between receiving and outputting spikes, and it restricts a rule to a certain time window, which means that a rule can only be used within a specified time range. We specify that each rule is performed only during its continuous schedule, during which the neuron is locked and cannot send or receive spikes. If a neuron has no schedule covering a given time, it neither receives nor sends spikes at that time. Moreover, the universality of DSN P systems in both generating and accepting modes is proved, and a universal DSN P system with 81 neurons for computing functions is also constructed.

1. Introduction

As an important branch of natural computing, membrane computing enriches the theoretical framework of biomolecular computing. It is inspired by the biological structures and functions of cells and tissues and has good distributed and parallel computing capabilities. At present, there are four main types of membrane systems: cell-like P systems [1, 2], tissue P systems [3, 4], numerical P systems [5, 6], and neural P systems [7]. Among them, spiking neural P systems, which belong to neural P systems and are regarded as the third-generation neural network model in membrane computing, are inspired by the biological phenomenon that neurons communicate with each other through spikes. In recent years, SN P systems have received great attention, and many variants of SN P systems have been discussed.

SN P systems were first proposed in 2006 [8]. An SN P system can be regarded as a directed graph in which nodes represent neurons and arcs represent synapses. Each neuron mainly contains two components, spikes and rules, where the rules include spiking rules of the form $E/a^c \to a; d$ and forgetting rules of the form $a^s \to \lambda$, which operate differently. Assuming that a spiking rule is used at time $t$, it consumes $c$ spikes and produces one spike in the neuron at time $t + d$, and the produced spike is then transmitted to the adjacent neurons through the synapses. The function of a forgetting rule is to eliminate spikes. Over the years, many variants of SN P systems have been proposed by considering different biological characteristics and phenomena, such as SN P systems with astrocytes [9], SN P systems with neuron division and dissolution [10], cell-like SN P systems [11], numerical SN P systems [12], and SN P systems with polarizations [13].

Because SN P systems have strong extensibility, in recent years research on SN P systems has mainly focused on the theoretical aspect, especially the establishment of various computing models. Many new variants of SN P systems have been proposed by changing rules, synapses, and structures. Some studies have changed the forms of rules, for instance, SN P systems with white hole neurons [14], SN P systems with request rules [15], SN P systems with inhibitory rules [16], SN P systems with communication on request [17, 18], nonlinear SN P systems [19], and SN P systems with target indications [20]. Based on the inhibitory way of communication between neurons, many studies of SN P systems with antispikes have been proposed [21, 22]. There are also changes in the synapses between neurons, for example, SN P systems with multiple channels [23], SN P systems with inhibitory synapses [24], SN P systems with rules on synapses [25, 26], SN P systems with thresholds [27], as well as SN P systems with scheduled synapses [28, 29]. According to the self-organizing and adaptive characteristics of artificial neural networks, SN P systems with plasticity structure were established [30, 31]. Abstracted from neural network models, some new neural P systems were proposed [32, 33]. Moreover, motivated by the characteristics of the dendrites of nerve cells, dendrite P systems were investigated [34].

At present, application research on SN P systems is still limited. However, in order to solve practical problems, many scholars have carried out research on the application of membrane computing and obtained results in several areas, for example, optimization problems [35, 36], fault diagnosis [37, 38], and image recognition [39, 40]. Most of the application research combines an optimization algorithm with a P system model and designs an approximate optimization algorithm based on the P system, which is mainly used to solve clustering problems [41, 42]. There are also some studies on the combination of P systems and neural networks, and some achievements have been made in image processing. The development of applications of membrane computing has received much attention; it is a new expansion and breakthrough in the field of membrane computing and advances research on both theory and application.

In the theoretical study of P systems, their computing power and efficiency are analysed and discussed. The main purpose of studying computing efficiency is to analyse whether the proposed system can solve computationally hard problems, such as the traveling salesman problem [43] and the 0-1 knapsack problem [44], within a feasible time. To prove computing power, P systems are compared with the Turing machine to analyse their universality [45, 46]. The theoretical research on spiking neural P systems mainly focuses on their computing power when working in generating and accepting modes, and a proposed SN P system is usually shown to simulate the register machine.

A register machine is a five-tuple of the form $M = (m, H, l_0, l_h, I)$, where $m$ is the number of registers, $H$ is the set of instruction labels, $l_0$ is the initial label, $l_h$ is the final label, and $I$ is the set of instructions of the following three forms:
(i) $l_i : (\mathrm{ADD}(r), l_j, l_k)$ is the ADD instruction. The value stored in register $r$ is increased by 1, and the machine then moves nondeterministically to one of the instructions with labels $l_j$ and $l_k$.
(ii) $l_i : (\mathrm{SUB}(r), l_j, l_k)$ is the SUB instruction. If register $r$ is not empty, its value is decreased by 1 and the machine moves to the instruction with label $l_j$; if register $r$ is empty, the machine moves to the instruction with label $l_k$.
(iii) $l_h : \mathrm{HALT}$ is the halt instruction. When the process of computation reaches the halt instruction, the computation stops.
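To make these semantics concrete, the following is a minimal Python sketch of a register machine interpreter; the tuple-based instruction encoding and the use of random.choice for the nondeterministic ADD branch are illustrative assumptions, not part of the formal definition.

```python
import random

def run_register_machine(program, l0, registers):
    """Run a register machine.

    program:   dict mapping a label to ('ADD', r, lj, lk),
               ('SUB', r, lj, lk), or ('HALT',).
    l0:        initial label.
    registers: dict mapping a register index to a natural number.
    """
    label = l0
    while True:
        instr = program[label]
        if instr[0] == 'HALT':                       # halt: computation stops
            return registers
        _, r, lj, lk = instr
        if instr[0] == 'ADD':                        # increase register r by 1,
            registers[r] = registers.get(r, 0) + 1   # then move nondeterministically
            label = random.choice([lj, lk])          # to lj or lk
        else:                                        # 'SUB'
            if registers.get(r, 0) > 0:              # nonempty: decrease, go to lj
                registers[r] -= 1
                label = lj
            else:                                    # empty: leave it, go to lk
                label = lk

# A tiny hypothetical program: keep adding 1 to register 1 until the
# nondeterministic choice jumps to the halt label.
prog = {'l0': ('ADD', 1, 'l0', 'lh'), 'lh': ('HALT',)}
print(run_register_machine(prog, 'l0', {1: 0}))      # e.g. {1: 3}
```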

Generally, delays in current SN P systems indicate that a neuron is closed for a certain time and then reopens to accept spikes. In this paper, a schedule is instead added to the spiking and forgetting rules to express the delay, which more naturally captures the fact that there is a delay between input and output. Therefore, delayed SN P systems (DSN P systems) are proposed. A rule is only allowed to run during a specific period of time, which represents the time delay between the input and output of spikes. The activation of a reference neuron, that is, the application of its rule, indicates when a rule in a connected neuron takes effect or becomes available. For example, a schedule means that the rule in a neuron is available only from a given starting time and before a given ending time; in other words, the rule is activated after the starting time and produces and sends spikes before the ending time.

This work mainly proposes DSN P systems, a new variant of SN P systems, and expresses the time difference between input and output in the form of a time interval, called a schedule. The schedule is set on spiking rules and forgetting rules to determine the handling of spikes in a neuron. The performance of the system is continuous in total time, and once a rule satisfies the activation condition of the neuron, the neuron is locked by this rule; that is to say, the rule must be used and can only be used in the specified period of time. It is notable that classic SN P systems apply rules based on the principle of maximal parallelism, but DSN P systems change this so that the rules are executed in order according to their schedules. In addition, the computing power of DSN P systems under the generating and accepting modes is proved. In this way, DSN P systems are Turing universal as number generating and accepting devices.

The structure of the rest of this paper is as follows: Section 2 proposes the definition of DSN P systems and gives an example to illustrate the operation mode of the DSN P system. Section 3 proves the Turing universality of DSN P systems by proving the computing power of the system in the generating mode and the accepting mode, respectively. Then, a small universal DSN P system for computing functions is constructed in Section 4. Finally, Section 5 summarizes the work we have done and outlines future research directions.

2. Delayed Spiking Neural P Systems

In this section, a new variant of spiking neural P systems is proposed, called delayed spiking neural P systems (DSN P systems). First, the definition of the system is given, together with detailed explanations. Then, an example is given to show how the system works.

2.1. Definition

Definition 1. A DSN P system of degree $m \ge 1$ is defined as a construct consisting of the following components:
(1) $O = \{a\}$ is the singleton alphabet, and $a$ denotes a spike.
(2) $\sigma_1, \ldots, \sigma_m$ are neurons of the form $\sigma_i = (n_i, R_i)$, $1 \le i \le m$, where $n_i \ge 0$ is the initial number of spikes in $\sigma_i$ and $R_i$ is the set of rules of the following two forms:
(i) Spiking rules of the form $E/a^c \to a$ together with a schedule, where $E$ is a regular expression over $O$ and $c \ge 1$.
(ii) Forgetting rules of the form $E/a^s \to \lambda$ together with a schedule, where $E$ is a regular expression over $O$ and $s \ge 1$.
(3) A set of reference neurons.
(4) $syn \subseteq \{1, 2, \ldots, m\} \times \{1, 2, \ldots, m\}$ represents the synapses, with $(i, i) \notin syn$ for any $1 \le i \le m$.
(5) $in$ and $out$ are the input neuron and output neuron, respectively.
How the rules work is explained here. A spiking rule can be executed on the condition that the neuron contains at least $c$ spikes and the running time conforms to the schedule. $E$ is a regular expression over $O$. Each time the rule is applied, $c$ spikes are consumed to produce one spike, which is transmitted to the connected neurons. Assume that a neuron contains $b$ spikes and a spiking rule $E/a^c \to a$. The rule can be applied if and only if $b \ge c$ and the performing time conforms to the schedule of this rule. When the number of spikes in the neuron is less than $c$, the rule cannot be used.
The schedule specifies when the neuron receives spikes from associated neurons and when it outputs spikes: the first time of the schedule is the time at which the neuron receives spikes, and the second time indicates the performing time of the rule. If the two times differ, the neuron receives spikes at the first time, and the specific rule in the neuron is performed at the second time. If the two times coincide, the spiking rule can be written in shorthand, meaning that the neuron receives spikes while the rule is executed, with no stagnation; in other words, there is no delay between receiving spikes and executing the rule. The schedule makes the rules run in a certain order. And if there is no delay between receiving and transmitting spikes, that is, spikes are transmitted out of the neuron at the same time as the rule is executed, spiking rules are simply written as $E/a^c \to a$ and forgetting rules as $E/a^s \to \lambda$, without a schedule.
Note that the performing time of the system is continuous. Continuity refers to the schedules of the system forming a continuous sequence of times: a rule can only be used if its schedule is contiguous with the schedule of the superior connected neuron. For instance, suppose the system has three neurons with three rules, each rule having its own schedule; the system is then continuous over the union of these schedules, and if a neuron fires within its schedule, its connected neurons fire only when their rules have schedules that directly follow it. In our figures, we indicate a reference neuron with the symbol "•". If the reference neuron is activated at the start of its schedule and outputs its spike before the end of that schedule, then the connected neuron is only available beginning at the end of that schedule. If a neuron applies a rule and produces a spike but no rule in the adjacent neuron is available at the time of outputting, then no neuron can receive this spike.
A forgetting rule can be used only when the neuron contains the required number of spikes and the current time conforms to the schedule of the rule. The function of forgetting rules is to consume spikes in neurons. The schedule of a forgetting rule has the same meaning as that of a spiking rule. Each neuron may contain one or more forgetting rules or none. Every neuron contains at least one spiking rule or forgetting rule, and spiking rules and forgetting rules can coexist in a neuron. Only one rule can be used in each neuron at each time. All neurons in a spiking neural P system work in parallel, but the rules in each neuron are applied sequentially. In other words, if there are at least two rules in a neuron, then, according to the time control over the rules, only the rule that meets the schedule is selected.
The configuration of the system at a given moment is composed of the numbers of spikes contained in each neuron. At the initial moment, every neuron in the system is in an open state and no rule has been applied; this is the initial configuration. When no neuron in the system has a rule that can be used and all neurons remain open, the system reaches the halt configuration. The system evolves from one configuration to another by executing the rules in each neuron; such a process is called a transition between configurations. A computation of the system consists of a series of transitions starting from the initial configuration, and the computation stops if and only if it reaches the halt configuration.
In the generating mode, the spiking neural P system contains at least one output neuron whose function is to send the generated spikes to the environment. There are several definitions of the calculation result of spiking neural P systems [47, 48]. In this paper, the calculation result is the difference between the first two times at which the output neuron fires and sends spikes to the environment. For instance, if $t_1$ is the time of firing the output neuron for the first time and $t_2$ is the time of firing it for the second time, the calculation result is $t_2 - t_1$. Because the system works nondeterministically in generating mode, it produces a collection of numbers, denoted as $N_2(\Pi)$, where the subscript 2 indicates that the result is determined by the first two output spikes. When the spiking neural P system is in the accepting mode, there are input neurons in the system, whose function is to receive spikes from the environment. When a number is to be recognized by the system, it is encoded into a spike train in which two spikes are separated by a number of steps equal to that number (a spike train is a binary sequence over $\{0, 1\}$, where 1 represents one spike and 0 represents no spike). The system then reads the spike train through the input neurons. The set of all numbers accepted by the system is denoted as $N_{acc}(\Pi)$.
The set of all numbers generated or accepted by a DSN P system $\Pi$ in the generating and accepting modes is denoted as $N_\alpha(\Pi)$, where $\alpha \in \{gen, acc\}$. The family of all such sets computed by DSN P systems having no more than a given number of rules in each neuron and no more than a given number of neurons, under the generating or accepting mode, is denoted accordingly; when such a bound is not specified, it is replaced by $*$.
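To illustrate how a schedule restricts when a rule may be used, the following Python sketch models a scheduled rule in the spirit of the definition above; the class name, the representation of a schedule as a pair (receive time, firing time), and the helper functions are our own illustrative assumptions.

```python
import re
from dataclasses import dataclass

@dataclass
class ScheduledRule:
    regex: str       # regular expression E over {'a'}, e.g. 'a(aa)*' (odd spike count)
    consumed: int    # number of spikes c consumed when the rule is applied
    produced: int    # 1 for a spiking rule, 0 for a forgetting rule
    t_receive: int   # schedule: time at which the neuron may receive spikes
    t_fire: int      # schedule: time at which the rule is performed

def applicable(rule, spikes, now):
    """A rule can be used only if the spike content matches E, enough spikes
    are present, and the current time is the firing time of its schedule."""
    return (re.fullmatch(rule.regex, 'a' * spikes) is not None
            and spikes >= rule.consumed
            and now == rule.t_fire)

def apply_rule(rule, spikes):
    """Consume the rule's spikes; return (remaining spikes, spikes sent out)."""
    return spikes - rule.consumed, rule.produced

# A rule with the (hypothetical) schedule [2, 4]: spikes arrive at time 2,
# the rule fires at time 4, so there is a delay of two steps.
r = ScheduledRule(regex='a', consumed=1, produced=1, t_receive=2, t_fire=4)
print(applicable(r, spikes=1, now=3))   # False: outside the schedule
print(applicable(r, spikes=1, now=4))   # True: the rule fires and emits one spike
```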

2.2. Illustrative Example

An example of a DSN P system with five neurons is given in Figure 1, and the number of spikes in each neuron at each time is given in Table 1. In this illustrative example, five neurons are used to explain how spiking rules and forgetting rules are performed. Assume that the first neuron has one spike at the initial time; it produces one spike and sends it to its two connected neurons, respectively. One of these neurons has a rule whose schedule allows it to receive the spike coming from the first neuron at that time. In this way, both connected neurons then have one spike.

Before the end of its schedule, this neuron sends a spike to two further neurons, each of which receives one spike at that time. At the same time, another neuron performs its rule and receives one spike, which means that its rule will be used. Meanwhile, the rule in the fourth neuron can also be applied. Therefore, the two spikes in the final neuron, one coming from each of these neurons, activate its rule, and one spike is then sent to the other connected neurons before the end of the corresponding schedule.

3. Computing Power as the Number Generator and Acceptor

This section proves the computing power of DSN P systems as generating devices and accepting devices and shows that DSN P systems are Turing universal. In order to prove the computing power of a DSN P system, a register machine $M$ is considered. Each register $r$ of $M$ corresponds to a neuron $\sigma_r$ in the system $\Pi$. All of the instructions also correspond to neurons in $\Pi$, such that an ADD instruction with label $l_i$ corresponds to neuron $\sigma_{l_i}$. And if register $r$ stores the number $n$, then $2n$ spikes are stored in neuron $\sigma_r$.
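The correspondence between register contents and spike counts used throughout the proofs can be summarized in a few lines; the helper names below are hypothetical and simply restate the convention that a register holding n is represented by 2n spikes.

```python
def register_to_spikes(n):
    """Register r storing n corresponds to neuron sigma_r holding 2n spikes."""
    return 2 * n

def simulate_add(spikes):
    """Simulating ADD(r): the register grows by 1, i.e. the neuron gains 2 spikes."""
    return spikes + 2

def simulate_sub(spikes):
    """Simulating SUB(r): a nonempty register (>= 2 spikes) loses 2 spikes and the
    computation continues with lj; an empty register continues with lk."""
    if spikes >= 2:
        return spikes - 2, 'lj'
    return spikes, 'lk'

assert register_to_spikes(3) == 6
assert simulate_add(register_to_spikes(0)) == 2
assert simulate_sub(0) == (0, 'lk')
```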

There are two working modes of the register machine: the generating mode and the accepting mode. The system in generating mode has three types of modules: module ADD, module SUB, and module OUTPUT. In accepting mode, the system has three types of modules: module ADD, module SUB, and module INPUT. The set of numbers generated by the register machine $M$ is denoted as $N(M)$. In both modes, any Turing computable set of natural numbers can be generated; that is, $N(M) = NRE$, where $NRE$ denotes the family of recursively enumerable sets of natural numbers, i.e., the sets of numbers computed by Turing machines.

3.1. The DSN P System as Generator

Theorem 1. DSN P systems working in the generating mode are Turing universal; that is, they generate exactly the family $NRE$.

Proof. In generating mode, all registers are empty in the initial state, and the computation starts with instruction $l_0$. The number stored in register 1 at the end of the calculation is the number generated by the register machine. Assume that all neurons are empty, except the neuron associated with $l_0$, which has one spike.

The module ADD, shown in Figure 2, is used to simulate an ADD instruction $l_i : (\mathrm{ADD}(r), l_j, l_k)$. The reference neuron in this module is the neuron associated with $l_i$. Assume the rule in this neuron is activated at time $t$; it sends one spike to each of its two connected neurons, which receive this spike at the next time. Meanwhile, one of the rules in the auxiliary neuron is nondeterministically chosen to perform, so there are two cases:
(1) If the first rule is used, one connected neuron accepts one spike at the next time, so that its rule can be activated. At the same time, another connected neuron receives one spike and applies its rule. Subsequently, the corresponding target neurons receive spikes from them, so their rules can be used. Therefore, the auxiliary neuron is left empty, and neuron $\sigma_r$ has gained two spikes.
(2) If the second rule is applied, then the auxiliary neuron becomes empty at the next time, and the rule in another connected neuron is enabled. The corresponding target neurons each receive one spike from it, so their rules can perform, and the spikes are passed on at the following time.
In both cases, neuron $\sigma_r$ gains two spikes, corresponding to increasing register $r$ by 1, and the computation continues from one of the instructions with labels $l_j$ and $l_k$, chosen nondeterministically. So far, module ADD has been shown to simulate the ADD instruction correctly.

The following is the simulation proof of module SUB. The structure and rules of module SUB are shown in Figure 3, and it is used to simulate a SUB instruction $l_i : (\mathrm{SUB}(r), l_j, l_k)$. The simulation of module SUB also starts at the reference neuron associated with $l_i$. This neuron activates at time $t$ and sends one spike to each of two connected neurons before the end of its schedule, which means that these neurons receive the spike at the next time. Depending on whether neuron $\sigma_r$ has spikes at the initial time, there are two cases:
(1) If register $r$ is not empty, neuron $\sigma_r$ holds $2n$ spikes. After receiving the spike, it contains an odd number of spikes, and its rule can be used. Subsequently, two connected neurons each receive one spike from $\sigma_r$ and their rules can be used; at the same time, a further neuron also receives one spike, so its rule can be used as well. In the following steps, the spikes are passed on through the auxiliary neurons and the corresponding rules are applied. In this way, the neuron associated with $l_k$ remains empty without receiving any spikes, and the neuron associated with $l_j$ gains one spike, which corresponds to decreasing register $r$ by 1 and moving to instruction $l_j$.
(2) If there is no spike in neuron $\sigma_r$, the auxiliary neurons obtain one spike from the reference neuron, and their rules are activated in turn. Then, the following neurons each obtain two spikes coming from the auxiliary neurons, so their rules can fire. Therefore, the neuron associated with $l_k$ receives one spike, and the neuron associated with $l_j$ remains empty, which corresponds to register $r$ being empty and the computation moving to instruction $l_k$.
In particular, there is no interference between SUB modules, even if there are multiple SUB instructions acting on a common register. An illustrative example is given in Figure 4. Assume that two SUB instructions sharing a register are simulated. When the shared register neuron fires, the auxiliary neurons of both modules get one spike, but only the rules of the module whose instruction is currently simulated are enabled; the auxiliary neurons of the other module end up empty. Therefore, the SUB instruction can be simulated correctly by module SUB.
The module OUTPUT, whose structure is shown in Figure 5, is used to halt the computation. In the initial state, one neuron of the module has one spike. Assume that neuron $\sigma_1$ has $2n$ spikes according to the number $n$ stored in register 1 of $M$ and that the halting neuron receives one spike at time $t$. Neuron $\sigma_1$ then gets one more spike, so it holds an odd number of spikes; its rule is activated and sends a spike to two connected neurons, respectively. In this way, one of them has two spikes, which is an even number, so its rule can be applied at the next time and sends one spike to the environment. At the same time, a spike is sent back to neuron $\sigma_1$, and neuron $\sigma_1$ fires again. At this point, the other neuron holds three spikes, and no rule in it can be enabled since the number of spikes is odd.
After $n$ steps, neuron $\sigma_1$ has only one spike left, so that its rule can no longer be used. At this moment, the auxiliary neuron has one spike. Therefore, it is activated and sends a spike onward, so that the number of spikes in the output neuron becomes even again; its rule can be applied and sends a spike to the environment for the second time. The number computed by the DSN P system is the difference between the first two times at which spikes are sent to the environment; that is, exactly the number $n$ stored in register 1. The behaviour at each time is shown in Table 2.
These three types of modules correctly simulate the corresponding instructions; that is, the system correctly simulates the register machine. Therefore, Theorem 1 is proved.

3.2. The DSN P System as Acceptor

Theorem 2. DSN P systems working in the accepting mode are Turing universal; that is, they accept exactly the family $NRE$.

Proof. In accepting mode, all registers except register 1 are empty in the initial state. As in the proof of Theorem 1, the proof of Theorem 2 also needs to simulate a register machine $M'$, which is a deterministic register machine. Similarly, register $r$ corresponds to neuron $\sigma_r$, and if the number $n$ is stored in register $r$, then there are $2n$ spikes in neuron $\sigma_r$. If the register machine reaches the halting instruction $l_h$ and the calculation stops, the number stored in register 1 is accepted by the register machine. The simulation of module SUB has already been proved in Theorem 1. Therefore, in the proof of Theorem 2, we only need to prove that the modules ADD′ and INPUT simulate the corresponding instructions and input correctly.
At the initial configuration of the system, all neurons are empty, except for the reference neuron, which contains one spike. Module ADD′, shown in Figure 6, differs from module ADD shown in Figure 2: it simulates the deterministic ADD instruction $l_i : (\mathrm{ADD}(r), l_j)$. Assume that the reference neuron has one spike and activates at time $t$. Thus, two connected neurons each receive one spike from it at the next time, and the rule in one of them can be applied. Afterwards, both neuron $\sigma_r$ and the neuron associated with $l_j$ receive a spike, and at this time, neuron $\sigma_r$ has two spikes. The simulation of module ADD′ is then complete, since there are two spikes in $\sigma_r$ and the content of register $r$ increases by one.
Module INPUT, shown in Figure 7, is used to introduce the number $n$ into the system through the input neuron. The module INPUT derives the number from the interval between the times of receiving two spikes; its function is to read the spike train in which the two input spikes are separated by $n$ steps. Assume the input neuron receives a spike at time $t$. Three connected neurons each receive one spike at the next time. At the following time, the rules in two of them are enabled; thus, one spike is received by each of the two neurons that exchange spikes with each other, and at the same time, neuron $\sigma_1$ receives a spike from each of them. Therefore, neuron $\sigma_1$ gains two spikes at that moment. Notably, the rules in the two exchanging neurons do not stop until the second input spike is received. After $n$ steps, neuron $\sigma_1$ has received $2n$ spikes; that is, the number $n$ is stored in register 1. At this moment, another neuron has two spikes, so its rule can be used to produce one spike and send it onward. The behaviour of this module at each time is presented in Table 3.
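A compact sketch of how a number can be handed to the system as a spike train and recovered as register content may help here; the exact format (two spikes separated by n steps) and the helper names are assumptions consistent with the description above rather than a verbatim reproduction of Figure 7.

```python
def encode_number(n):
    """Encode n as a spike train: a spike, n - 1 empty steps, then a second spike."""
    assert n >= 1
    return '1' + '0' * (n - 1) + '1'

def read_input(spike_train):
    """Mimic the effect of module INPUT: between the two input spikes, two spikes
    per step accumulate in neuron sigma_1, giving 2n spikes for the number n."""
    first = spike_train.index('1')
    second = spike_train.index('1', first + 1)
    n = second - first                    # steps between the two input spikes
    return 2 * n                          # spikes stored in neuron sigma_1

train = encode_number(4)                  # '10001'
assert read_input(train) == 8             # register 1 stores 4, i.e. 8 spikes
```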
From the above proof, module ADD’, module SUB, and module INPUT can be correctly simulated. In this way, the proof of Theorem 2 is complete.

4. A Small Universal DSN P System as Computing Functions

This section mainly proves the small universality of DSN P systems for computing functions. First, some simple theoretical explanations are needed. Turing computable functions can be computed by register machines. Here, the way a register machine calculates a function is described. The register machine stores the parameters in its first registers, and the other registers are empty. The calculation starts with instruction $l_0$ and ends with instruction $l_h$. When the computation stops, the result is stored in one of the specified registers, with the rest of the registers empty. It is notable that the register machine is deterministic when working in computing mode. In this paper, we prove the universality of the system as a function computing device by simulating a universal register machine.

When proving the small universality of the DSN P system as a function computing device, it is usual to consider a small universal register machine $M_u$; more details are explained in [49]. $M_u$ contains 8 registers and 23 instructions, as shown in Figure 8. The key to proving the universality of the system is to find a specific number of neurons that can simulate the register machine and to minimize this number through the combination of some ADD and SUB instructions. However, since a SUB instruction acts on register 0, which is used to store the calculated result, it is not reasonable to simulate $M_u$ directly. To solve this problem, a new register 8 is added as the output register, whose function is to save the calculated result. Therefore, register machine $M_u$ is extended to $M_u'$, and the halting instruction is replaced with the following three new instructions:
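Consistent with the instruction counts stated below (one extra SUB instruction, one extra ADD instruction, and a new halting instruction), the standard extension used with this machine in the small-universality literature takes the following form, which we assume here:

$$l_{22} : (\mathrm{SUB}(0), l_{23}, l_h'), \qquad l_{23} : (\mathrm{ADD}(8), l_{22}), \qquad l_h' : \mathrm{HALT}.$$

These instructions repeatedly move the content of register 0 into the new output register 8 and only then halt.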

In this way, register machine $M_u'$ contains 9 registers and 25 instructions (14 SUB instructions, 10 ADD instructions, and 1 halting instruction).

Theorem 3. There is a universal DSN P system with 81 neurons for computing functions.

The overall working process of the DSN P system is shown in Figure 9, including five types of modules: the INPUT module, ADD modules, SUB modules, the OUTPUT module, and combination modules whose function is to simulate combinations of SUB instructions and ADD instructions. First, the module INPUT reads the encoded parameters of the function into the system. Then, the process enters the register machine simulator. In addition, it is important to note that when simulating the register machine $M_u'$, the deterministic ADD instructions are simulated by module ADD′ as shown in Figure 6, and module SUB is shown in Figure 3. The final stage is the module OUTPUT, whose structure is shown in Figure 5, except that the output register is register 8 instead of register 1.

In addition, the construction of the module INPUT used in the proof of this theorem is different from the one used previously for accepting devices. Assume that the parameters of the function to be computed are fixed; in register machine $M_u'$, register 1 stores the first parameter and register 2 stores the second parameter. Therefore, the module INPUT shown in Figure 10 is constructed. In the beginning, the two parameters are encoded into the form of a spike train. Then, through the computation of module INPUT, the two parameters are stored in the corresponding neurons, represented by twice as many spikes, respectively.

Module INPUT, shown in Figure 10, is used to load the two parameters introduced through the input neuron into neurons $\sigma_1$ and $\sigma_2$ as twice as many spikes. Assume that the input neuron activates at time $t$. Each of the connected auxiliary neurons receives one spike at the next time. Thus, two of them activate, and each sends one spike onward, so that neuron $\sigma_1$ receives two spikes at that time, and each of the two auxiliary neurons has one spike. Until the input neuron receives the second spike, the rules in the remaining auxiliary neurons cannot fire. In this way, neuron $\sigma_1$ gets twice as many spikes as the number to be stored in register 1.

Simultaneously, when the input neuron gets its second spike, several of the auxiliary neurons have two spikes. Thus, their rules are enabled, each consuming one spike. At the next time, neuron $\sigma_2$ receives two spikes, one from each of them. At the same time, these two neurons each receive a spike from the other, so the number of spikes they contain returns to an even number. In this way, they remain active for the following steps. Therefore, neuron $\sigma_2$ accepts twice as many spikes as the second parameter until the input neuron receives the third spike. Then, one neuron has three spikes, and its rule can be performed, sending one spike onward to start the simulation of the register machine.
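As with the single-number case in Section 3.2, a short sketch of one possible encoding of the two parameters into a single spike train is given below; the exact format, with three spikes whose gaps carry the two values, is an assumption for illustration, and Figure 10 fixes the actual construction.

```python
def encode_two_parameters(x, y):
    """Encode two parameters into one spike train: the gap between the first two
    spikes carries x and the gap between the last two spikes carries y."""
    assert x >= 1 and y >= 1
    return '1' + '0' * (x - 1) + '1' + '0' * (y - 1) + '1'

def decode_two_parameters(train):
    """Recover (x, y) from the gaps between the three spikes of the train."""
    positions = [i for i, bit in enumerate(train) if bit == '1']
    first, second, third = positions
    return second - first, third - second

train = encode_two_parameters(3, 5)        # '100100001'
assert decode_two_parameters(train) == (3, 5)
```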

Therefore, the system needs 94 neurons to simulate the universal register machine $M_u'$, including:
(i) 6 neurons for module INPUT
(ii) auxiliary neurons for the 10 ADD instructions
(iii) auxiliary neurons for the 14 SUB instructions
(iv) 25 neurons for the 25 labels of instructions
(v) 9 neurons for the 9 registers
(vi) 2 neurons for module OUTPUT
(items (ii) and (iii) together account for the remaining 52 auxiliary neurons).

The module SUB-ADD-1, shown in Figure 11, simulates a sequence consisting of a SUB instruction followed by an ADD instruction. The simulation of module SUB-ADD-1 also starts at the reference neuron. When the intermediate neuron has one spike, the ADD instruction is triggered and performed immediately; thus, the final result should be that the target label neuron gains one spike, the register of the ADD instruction increases by one, and the intermediate neuron is left empty. If the other branch neuron has one spike, the ADD instruction does not activate, and the corresponding register and label neuron certainly remain empty. The computation process is divided into two cases depending on the number of spikes in the register neuron of the SUB instruction, and the process is similar to module SUB (shown in Figure 3). Assume that the reference neuron fires at time $t$.
(1) If the register is nonempty, its neuron has $2n$ spikes. After receiving the spike from the reference neuron, it holds an odd number of spikes and its rule fires; at the same time, an auxiliary neuron also gains one spike. Thus, at the next time, the connected neurons each gain one spike, and their rules can be used. In the following steps, the spikes are passed on from neuron to neuron and the corresponding rules fire in turn, until the last neuron applies its rule and sends one spike onward before the end of its schedule. In this way, the register of the ADD instruction gains two spikes, and the branch neuron of the empty case remains empty.
(2) If the register is empty, its neuron contains no spike at the initial time. The auxiliary neurons then obtain one spike from the reference neuron, and their rules are applied in turn, sending spikes onward before the end of their schedules. Eventually, two neurons receive two spikes each and perform their rules; therefore, the target neuron of the empty branch gains the spike, and the other neurons are left empty.

In this way, consecutive SUB-ADD instructions can be simulated correctly, and the neuron associated with the intermediate label can be saved. There are 6 pairs of instructions corresponding to module SUB-ADD-1, so 6 neurons associated with intermediate labels can be saved.

Module SUB-ADD-2 is used to simulate the second kind of combination of a SUB instruction and an ADD instruction. As shown in Figure 12, the computational process of SUB-ADD-2 is similar to that of module SUB-ADD-1. Thus, one neuron associated with the intermediate label can be saved.

Module ADD-SUB is shown in Figure 13; its function is to simulate consecutive ADD-SUB instructions. Assume that the reference neuron has one spike at time $t$; the produced spike is received by two connected neurons at the next time. The working process of the following steps is the same as in module SUB, divided into two cases. The register neuron of the ADD instruction receives one further spike at one of two possible times, so that it ends up with two spikes. Thus, both the ADD and the SUB instruction can be implemented correctly by this module. In this way, the neuron associated with the intermediate label can be saved, and one further neuron is also saved compared with module ADD′ shown in Figure 6. There are two pairs of instructions that fit module ADD-SUB; thus, four neurons are saved.

Consecutive ADD-ADD instructions can also be combined via module ADD-ADD (shown in Figure 14). Assume that the reference neuron activates at time $t$. Three connected neurons each receive one spike at the next time. Then, two register neurons get their second spike at the following time. Thus, consecutive ADD-ADD instructions can be simulated correctly by module ADD-ADD. The two ADD modules can share a common neuron, and the neuron associated with the intermediate label is also saved. Therefore, two neurons can be saved by module ADD-ADD.

From the above proof, six neurons are saved by module SUB-ADD-1, one neuron is saved by module SUB-ADD-2, four neurons are saved by module ADD-SUB, and two neurons are saved by module ADD-ADD, so a total of 13 neurons are saved. Therefore, the system needs only 81 neurons to simulate the register machine $M_u'$.
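The neuron count of Theorem 3 then follows directly:

$$94 - (6 + 1 + 4 + 2) = 94 - 13 = 81.$$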

5. Conclusion

In this paper, a new variant of SN P systems, called DSN P systems, is proposed, inspired by the biological reality that there is a certain time delay between the input and output of neurons. Compared with classic SN P systems, we make corresponding improvements to the spiking rules and forgetting rules by adding schedules to limit the performing time of the rules. This achieves a time delay between input and output through scheduled rules and makes the performance of the system continuous over its total working time. A rule can only be used within its specified time range, during which the neuron is locked and cannot send or receive spikes. This also allows a neuron with multiple rules to selectively apply a rule according to the schedule when it is activated. In addition, DSN P systems change the principle of maximal parallelism for applying rules: rules are executed in order according to their schedules. In Theorems 1 and 2, the computational power of DSN P systems is proved by simulating the register machine under the generating mode and the accepting mode, respectively. In Theorem 3, a universal DSN P system with 81 neurons for computing functions is constructed.

In this paper, we only address the theoretical side of DSN P systems. In future work, in both theory and application, there are still many problems that can be explored based on DSN P systems. In theory, we can further improve DSN P systems. We can demonstrate the computational power of systems working in other modes, such as sequential mode and parallel mode. We can also investigate whether the computational power remains unchanged if the types of rules are reduced. DSN P systems with other features, such as multiple channels and polarizations, are also to be studied. In terms of application, on the one hand, we can consider whether the DSN P system can be combined with a neural network and applied to image or text processing, as in the work in [50]. On the other hand, DSN P systems can also be combined with intelligent algorithms, and the performance of such intelligent optimization systems may be greatly improved. In addition, there are many other directions of application, such as fault diagnosis and hardware implementation. Although the application of SN P systems has not been fully studied, as a third-generation neural network model, the development of SN P systems has infinite possibilities, which need our continuous exploration.

Data Availability

This manuscript does not use any datasets.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was funded by the National Natural Science Foundation of China (nos. 61876101, 61802234, and 61806114), the Social Science Fund Project of Shandong Province (16BGLJ06 and 11CGLJ22), the China Postdoctoral Science Foundation Funded Project (2017M612339 and 2018M642695), the Natural Science Foundation of Shandong Province (ZR2019QF007), the China Postdoctoral Special Funding Project (2019T120607), and the Youth Fund for Humanities and Social Sciences of the Ministry of Education (19YJCZH244).