Abstract: Modern mathematical neural networks were originally inspired by biological neural networks, yet today's popular general-purpose large models incorporate no biological neural networks. The primary reason is that the differential equations describing biological neural networks are difficult to manipulate. At present, mathematical neural networks are distinguished by their capacity for large-scale deployment, while biological neural networks offer strong biological interpretability. This paper introduces a system of differential equations that is highly symmetric and convenient to manipulate, allowing the system to be handled as easily as the entries of a matrix and thereby combining the advantages of both approaches. Because this is a brand-new neural network framework, we first study the mathematical properties of the differential equations, then define a new signal propagation scheme, and finally propose a new training method for the network. Training this network does not rely on the traditional back-propagation algorithm; it depends solely on the propagation of local signals, so no global information is required. Each neuron adjusts itself according to the signals it receives and its predetermined strategy. As verification, we built a network that mimics the connectivity of a multilayer perceptron (MLP) and trained it on the MNIST dataset, demonstrating the effectiveness of our method.
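To make the local-learning claim concrete, the sketch below shows the general shape of a back-propagation-free update in which each neuron changes its weights using only the signals it directly receives and emits. It uses a generic Hebbian-style rule (Oja's rule) purely as a stand-in; the class name, the specific update rule, and all parameters here are illustrative assumptions, not the per-neuron strategy defined in this paper.

```python
# Illustrative sketch only: a layer trained with a purely local,
# Hebbian-style rule (Oja's rule), with no back-propagated gradients.
# The names and the update rule are assumptions for illustration;
# the paper defines its own per-neuron adjustment strategy.
import numpy as np

rng = np.random.default_rng(0)

class LocalLayer:
    def __init__(self, n_in, n_out, lr=1e-3):
        self.W = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_out, n_in))
        self.lr = lr

    def forward(self, x):
        # x: (n_in,) incoming signal; returns (n_out,) outgoing signal
        return self.W @ x

    def local_update(self, x, y):
        # Oja's rule: dW_ij = lr * (y_i * x_j - y_i**2 * W_ij).
        # Each row of W (one neuron) uses only its own output y_i and
        # the input x it actually receives: no global error signal.
        self.W += self.lr * (np.outer(y, x) - (y ** 2)[:, None] * self.W)

layer = LocalLayer(n_in=784, n_out=64)
for _ in range(100):
    x = rng.normal(size=784)      # stand-in for a flattened MNIST image
    y = layer.forward(x)
    layer.local_update(x, y)      # weight change uses local signals only
```

The point of the sketch is the information flow, not the particular rule: every weight update reads only quantities available at that neuron, which is what allows training without global information.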