  • Electromagnetic Fields of Moving Point Sources in the Vacuum

    Subjects: Information Science and Systems Science >> Basic Disciplines of Information Science and Systems Science Subjects: Physics >> Electromagnetism, Optics, Acoustics, Heat Transfer, Classical Mechanics, and Fluid Dynamics Subjects: Astronomy >> Astrophysical processes submitted time 2024-04-11

    Abstract: The electromagnetic fields of point sources with time-varying charges moving in the vacuum are derived using the Liénard-Wiechert potentials. The properties of the propagation velocities and the Doppler effect are discussed based on their far fields. The results show that the velocity of the electromagnetic waves and the velocity of the sources cannot be added like vectors; the velocity of the electromagnetic waves of moving sources is anisotropic in the vacuum; and the transverse Doppler shift is intrinsically included in the fields of the moving sources and is not a purely relativistic effect caused by time dilation. Since the fields are rigorous solutions of Maxwell's equations, the findings can help us abandon long-standing misinterpretations concerning classical mechanics and classical electromagnetic theory. Although it may violate the theory of special relativity, we show mathematically that, when the sources move faster than light in the vacuum, electromagnetic barriers and electromagnetic shock waves can be clearly predicted using the exact solutions. Since such sources cannot be detected by observers outside their shock-wave zones, an intuitive and reasonable hypothesis can be made that superluminal sources may be considered a kind of electromagnetic black hole.
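
    For reference, the standard Liénard-Wiechert potentials of a moving point charge, which the paper generalizes to time-varying charges $q(t)$, read (in SI units)

    $$\varphi(\mathbf{r},t)=\frac{1}{4\pi\varepsilon_0}\left[\frac{q}{(1-\mathbf{n}\cdot\boldsymbol{\beta})R}\right]_{t_r},\qquad \mathbf{A}(\mathbf{r},t)=\frac{\mu_0}{4\pi}\left[\frac{qc\,\boldsymbol{\beta}}{(1-\mathbf{n}\cdot\boldsymbol{\beta})R}\right]_{t_r},$$

    where $R$ is the distance from the retarded source position to the field point, $\mathbf{n}$ is the unit vector along it, $\boldsymbol{\beta}=\mathbf{v}/c$, and the brackets are evaluated at the retarded time $t_r=t-R(t_r)/c$.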

  • Copula Entropy: Theory and Applications

    Subjects: Mathematics >> Statistics and Probability Subjects: Statistics >> Mathematical Statistics Subjects: Information Science and Systems Science >> Basic Disciplines of Information Science and Systems Science submitted time 2024-04-11

    Abstract: Statistical independence is a core concept in statistics and machine learning. Representing and measuring independence are of fundamental importance in related fields. Copula theory provides the tool for representing statistical independence, while Copula Entropy (CE) provides the tool for measuring statistical independence. This paper first introduces the theory of CE, including its definition, theorem, properties, and estimation method. The theoretical applications of CE to structure learning, association discovery, variable selection, causal discovery, system identification, time lag estimation, domain adaptation, multivariate normality testing, two-sample testing, and change point detection are reviewed. The relationships between the former four applications and their connection to correlation and causality are discussed. The frameworks based on CE, the kernel method, and distance correlation for measuring statistical independence and conditional independence are compared. The advantage of CE over other independence and conditional independence measures is evaluated. The applications of CE in theoretical physics, astrophysics, geophysics, theoretical chemistry, cheminformatics, materials science, hydrology, climatology, meteorology, environmental science, ecology, animal morphology, agronomy, cognitive neuroscience, motor neuroscience, computational neuroscience, psychology, systems biology, bioinformatics, clinical diagnostics, geriatrics, psychiatry, public health, economics, management, sociology, pedagogy, computational linguistics, mass media, law, political science, military science, informatics, energy, food engineering, architecture, civil engineering, transportation, manufacturing, reliability, metallurgy, chemical engineering, aeronautics and astronautics, weaponry, automotive engineering, electronics, communication, high performance computing, cybersecurity, remote sensing, and finance are briefly introduced.
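
    For readers new to CE, its definition (the one reviewed in this paper) is the entropy of the copula density $c(\mathbf{u})$ of a random vector $\mathbf{x}$,

    $$H_c(\mathbf{x})=-\int_{[0,1]^N} c(\mathbf{u})\,\log c(\mathbf{u})\,\mathrm{d}\mathbf{u},$$

    which equals the negative mutual information, $I(\mathbf{x})=-H_c(\mathbf{x})$; in particular, $H_c(\mathbf{x})=0$ exactly when the components of $\mathbf{x}$ are statistically independent, which is what makes CE a measure of independence.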

  • Perhaps We Have Misunderstood the Maxwell’s Theory and the Galilean Transformations

    Subjects: Information Science and Systems Science >> Basic Disciplines of Information Science and Systems Science Subjects: Physics >> Electromagnetism, Optics, Acoustics, Heat Transfer, Classical Mechanics, and Fluid Dynamics Subjects: Electronics and Communication Technology >> Optoelectronics and Laser Subjects: Physics >> Geophysics, Astronomy, and Astrophysics Subjects: Physics >> The Physics of Elementary Particles and Fields submitted time 2024-04-08

    Abstract: Einstein's theory of special relativity is based on two postulates. The first is that the laws of physics are the same in all inertial reference frames. The second is that the velocity of light in the vacuum is the same in all inertial frames. The theory of special relativity is considered to be supported by a large number of experiments. This paper revisits the two postulates in light of new interpretations of the exact solutions for moving sources in the laboratory frame. The exact solutions are obtained using classical Maxwell theory and clearly show that the propagation velocity of the electromagnetic waves of moving sources in the vacuum is not isotropic; that the propagation velocity of the electromagnetic waves and the moving velocity of the sources cannot be added like vectors; and that the transverse Doppler effect is intrinsically included in the fields of the moving sources. The electromagnetic sources are subject to Newtonian mechanics, while the electromagnetic fields are subject to Maxwell's theory. We argue that, since their behaviors are quite different, it is not the best choice to bind them together and force them to undergo the same coordinate transformations as a whole, as in the Lorentz transformations. Furthermore, Maxwell's theory does not impose any limitation on the velocity of the electromagnetic waves; the assumption that no object can move faster than light in the vacuum needs further examination. We have carefully checked the main experimental results that are considered to support special relativity. Unfortunately, we found that these experimental results may have been misinterpreted. We propose instead a Galilean-Newtonian-Maxwellian relativity, which can give the same or even better explanations of those experimental results.
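
    For concreteness, the two coordinate transformations at issue, for frames in relative motion with velocity $v$ along $x$, are

    $$\text{Galilean: } x'=x-vt,\ \ t'=t;\qquad \text{Lorentz: } x'=\gamma(x-vt),\ \ t'=\gamma\!\left(t-\frac{vx}{c^{2}}\right),\ \ \gamma=\frac{1}{\sqrt{1-v^{2}/c^{2}}}.$$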

  • An intelligent measure based on energy-information conversion

    Subjects: Information Science and Systems Science >> Basic Disciplines of Information Science and Systems Science Subjects: Computer Science >> Other Disciplines of Computer Science Subjects: Engineering and technical science >> Engineering Mathematics submitted time 2024-03-30

    Abstract: "What is intelligence?" is one of the core questions of artificial intelligence, but there is no universally accepted definition. Based on the relationship between intelligence and life, this paper proposes that intelligence is a basic ability and characteristic attribute of living organisms: the ability to obtain the maximum amount of information with the minimum energy, and to adapt to the environment and maintain existence through information processing. On this basis, the paper puts forward the view that intelligence is the ability to convert material energy into information, introduces new concepts such as a measurement and calculation method for intelligence, average intelligence, and comprehensive intelligence, and then discusses the quantitative conversion relationship between matter, energy, and information, pointing out the upper bound of intelligence and the lower bound of the energy required to produce information. Finally, a dimensionless formula for measuring intelligence is given to facilitate practical application, providing a feasible calculation method for the quantitative analysis of the intelligence of intelligent systems.
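
    The "lower bound of energy conversion into information" mentioned here is presumably related to Landauer's principle (our reading; the abstract does not name the bound), which sets the minimum energy associated with processing one bit at temperature $T$:

    $$E_{\min}=k_{B}T\ln 2\approx 2.9\times10^{-21}\ \mathrm{J\ per\ bit\ at}\ T=300\ \mathrm{K},$$

    so normalizing the information obtained per unit energy by this bound is one natural route to a dimensionless measure.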

  • Deep-learning Review

    Subjects: Information Science and Systems Science >> Control science and technology submitted time 2024-01-13

    Abstract: As a field that has developed rapidly over the past ten years, deep learning has attracted increasing attention from researchers. It has clear advantages over shallow models in feature extraction and modeling. Deep learning is good at mining increasingly abstract feature representations from raw input data, and these representations generalize well. It has overcome several problems in AI that were once considered difficult to solve. With the significant increase in the size of training data sets and the surge in chip processing power, it has achieved remarkable results in target detection and computer vision, natural language processing, speech recognition, and semantic analysis, thereby promoting the development of artificial intelligence. Deep learning is a hierarchical machine learning method that involves multiple levels of nonlinear transformation. This paper first discusses the basics of deep learning, analyzes the strengths of the approach, and introduces the mainstream learning algorithms and the status of their applications. Finally, existing problems and directions for future development are summarized.

  • Motor fault diagnosis based on deep learning

    Subjects: Information Science and Systems Science >> Control science and technology submitted time 2024-01-07

    Abstract: Traditional motor fault diagnosis technology is usually based on a single type of state parameter, such as vibration parameters or electrical parameters. However, the monitoring range of a single type of motor state parameter is in many cases very limited, making it difficult to meet the needs of comprehensive motor fault diagnosis. The purpose of this paper is to propose a comprehensive motor fault diagnosis method that fuses vibration data and current data, so as to improve the reliability and accuracy of diagnosis. Building on data fusion, the work recognizes that in actual industrial and production environments the cost of obtaining large-scale labeled samples is often high or even prohibitive; the neural network is therefore further studied and improved, and a small-sample fault diagnosis network based on an RNN and an attention mechanism is proposed.
    In this paper, motor fault feature extraction methods are used to study the vibration and current signal characteristics of the motor under different faults. The feature extraction methods adopted are the Fast Fourier Transform (FFT) and the Hilbert-Huang Transform (HHT).
    According to the data fusion requirements of this paper, an overall data fusion scheme is designed. Fault features are extracted using the FFT, the HHT, a Convolutional Neural Network (CNN), and a Multilayer Perceptron (MLP) in turn, and the vibration and current parameters of the motor are fused for comprehensive fault identification and diagnosis. The results show that motor fault diagnosis based on data fusion improves the accuracy of the diagnosis and reduces the uncertainty caused by relying on a single parameter. The designed small-sample fault diagnosis network is used to identify the health status of equipment under small-sample conditions, with the attention mechanism capturing the spatial and channel relationships of the signals; experiments verify that the network offers advantages in diagnostic efficiency and accuracy under different small-sample working conditions.
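
    A minimal sketch of the kind of spectral feature extraction the abstract describes (our own illustration, not the authors' code): an FFT amplitude spectrum plus a Hilbert envelope spectrum, the demodulation step underlying HHT-style analysis (a full HHT would first apply empirical mode decomposition):

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def spectral_features(x, fs):
        """Return (freqs, FFT amplitude spectrum, envelope spectrum) of a 1-D signal."""
        n = len(x)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        amp = np.abs(np.fft.rfft(x)) / n                 # FFT amplitude spectrum
        envelope = np.abs(hilbert(x))                    # instantaneous amplitude
        env_spec = np.abs(np.fft.rfft(envelope - envelope.mean())) / n
        return freqs, amp, env_spec

    # toy vibration signal: 50 Hz carrier amplitude-modulated at a 7 Hz "fault" rate
    fs = 1000.0
    t = np.arange(0, 2.0, 1.0 / fs)
    x = (1 + 0.5 * np.sin(2 * np.pi * 7 * t)) * np.sin(2 * np.pi * 50 * t)
    freqs, amp, env_spec = spectral_features(x, fs)
    print("dominant line: %.1f Hz" % freqs[np.argmax(amp)])               # ~50 Hz
    print("envelope line: %.1f Hz" % freqs[1:][np.argmax(env_spec[1:])])  # ~7 Hz
    ```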

  • Survey of Deep Learning Applications in Industrial Fault Diagnosis

    Subjects: Information Science and Systems Science >> Basic Disciplines of Information Science and Systems Science submitted time 2024-01-06

    Abstract: In recent years, industrial processes have been developing toward greater complexity and scale, which poses a series of challenges for traditional fault diagnosis techniques in solving practical industrial process problems. Given the superior performance and unique potential of deep learning in feature extraction and pattern recognition, applying deep learning technology to fault diagnosis has become a current research focus. This article therefore introduces several typical fault diagnosis methods based on deep learning. Finally, the obstacles to applying deep learning to fault diagnosis are discussed, and future research directions are outlined.

  • Deep Learning Survey

    Subjects: Information Science and Systems Science >> Control science and technology submitted time 2024-01-06

    Abstract: Neural networks and deep learning are among the core topics of artificial intelligence: they imitate the working principles of the human brain and use multi-level neural connections to mine valuable knowledge and rules from data. Research on neural networks started in the 1940s and has gone through several ups and downs and waves of innovation. It now covers many model types and application fields, such as convolutional neural networks, recurrent neural networks, speech recognition, computer vision, and natural language processing. Deep learning refers to using multi-layer neural networks to solve complex nonlinear problems; it relies on massive data and computing resources, as well as efficient training and optimization techniques. Deep learning has made astonishing progress in recent years, but it also faces difficulties and challenges such as model interpretability, generalization ability, security, and reliability. It remains a vibrant and promising research field that is expected to open up more opportunities and possibilities for human intelligence and life. This article briefly introduces several neural network structures and several deep learning model structures.

  • Learning an Animatable 3D Face Model from Natural Scene Images

    Subjects: Information Science and Systems Science >> Basic Disciplines of Information Science and Systems Science submitted time 2024-01-06

    Abstract: Although current 3D face reconstruction methods based on a single image can recover fine geometric details, they have limitations. The faces generated by some methods cannot really be animated, because those methods do not model how wrinkles change with expression. Other methods are trained on high-quality facial scans and do not generalize well to natural scene images. The method described in this report regresses 3D face shape and animatable details that are specific to an individual but change with expression. The model is trained to generate a UV displacement map from a low-dimensional latent representation composed of person-specific detail parameters and generic expression parameters, while a regressor is trained to predict the detail, shape, expression, pose, and lighting parameters from a single image. To achieve this, the method introduces a novel detail-consistency loss that separates person-specific details from expression-dependent wrinkles. This disentanglement makes it possible to synthesize realistic person-specific wrinkles by controlling the expression parameters while keeping the person-specific details unchanged. The method is learned from natural scene images without paired 3D supervision.
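
    A minimal sketch of the detail-consistency idea as we read the abstract (hypothetical `encode`/`decode` callables, not the authors' code; the paper's actual loss may be computed on rendered images rather than displacement maps): two images of the same person should yield interchangeable detail codes, so swapping them and re-decoding should still reproduce each image's displacement map:

    ```python
    import torch

    def detail_consistency_loss(img_a, img_b, encode, decode):
        """encode(img) -> (detail_code, expr_params); decode -> UV displacement map.
        img_a and img_b picture the SAME person with different expressions."""
        d_a, e_a = encode(img_a)
        d_b, e_b = encode(img_b)
        # swap person-specific detail codes; expressions stay with their own image
        disp_a_swapped = decode(d_b, e_a)
        disp_b_swapped = decode(d_a, e_b)
        # swapped reconstructions should match the un-swapped ones
        return (disp_a_swapped - decode(d_a, e_a)).abs().mean() + \
               (disp_b_swapped - decode(d_b, e_b)).abs().mean()
    ```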

  • Filling Missing Data in Soft Sensing based on VAE

    Subjects: Information Science and Systems Science >> Control science and technology submitted time 2024-01-05

    Abstract: In the realm of soft sensing, missing data frequently occur on the journey from data collection to application, significantly diminishing model accuracy. This paper introduces an imputation model based on the Variational Autoencoder (VAE) and a GRU neural network. Validation on industrial process data confirms the accuracy of the imputed values. Experimental results demonstrate that the VAE imputation model yields an RMSE and MAE of 3.396% and 2.458% for a missing rate of 10%, and of 3.549% and 3.078% for a missing rate of 30%, respectively. Compared to alternative imputation algorithms such as PCA and SVD, the VAE model performs significantly better, affirming the feasibility of the model.
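
    A minimal sketch of a VAE-style imputer in the spirit of the abstract (our own PyTorch illustration; the layer sizes and the way the GRU is combined with the VAE are assumptions, not the authors' architecture):

    ```python
    import torch
    import torch.nn as nn

    class GRUVAEImputer(nn.Module):
        def __init__(self, n_features, hidden=64, latent=8):
            super().__init__()
            self.encoder = nn.GRU(n_features, hidden, batch_first=True)
            self.to_mu = nn.Linear(hidden, latent)
            self.to_logvar = nn.Linear(hidden, latent)
            self.decoder = nn.Sequential(
                nn.Linear(latent, hidden), nn.ReLU(), nn.Linear(hidden, n_features))

        def forward(self, x):                  # x: (batch, time, features), NaNs -> 0
            h, _ = self.encoder(x)
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
            return self.decoder(z), mu, logvar

    def loss_fn(recon, x, mask, mu, logvar, beta=1e-3):
        # reconstruct only where values are observed; KL regularizes the latent code
        mse = ((recon - x)[mask] ** 2).mean()
        kl = -0.5 * torch.mean(1 + logvar - mu ** 2 - logvar.exp())
        return mse + beta * kl

    # after training, missing entries are filled with the model's reconstruction:
    # x_filled = torch.where(mask, x, recon)
    ```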

  • Robot Supervisor: Non-invasive Algorithm Fairness Proof Mechanism

    Subjects: Information Science and Systems Science >> Information Security Technology submitted time 2023-11-23

    Abstract: In an online ride-hailing platform, the order dispatch algorithm is the infrastructure that matches drivers and passengers, and it directly affects the rights and travel experience of both. Testing and ensuring the fairness of the dispatch algorithm is therefore critical to the orderly operation of the platform and to protecting the rights and interests of drivers and passengers. This paper proposes a non-intrusive fairness test method for ride-hailing order dispatch algorithms: there is no need to go into the details of the algorithm, since only the scene information and the dispatch results are needed to test its fairness. The method greatly reduces computation cost and can protect corporate confidentiality and user privacy. It first characterizes algorithm fairness through statistical modeling of randomness, then uses hypothesis testing to verify algorithm fairness, and finally uses zero-knowledge proof technology to build contactless trust. In this study, Didi Chuxing data were used to verify the fairness of the algorithm with respect to pick-up distance. The experiments verified the feasibility and effectiveness of the method and demonstrated the potential of the framework for larger-scale and more complete algorithm fairness testing.
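
    A minimal sketch of the hypothesis-testing step (our own illustration; the paper's exact statistic and grouping are not specified here): a permutation test asking whether mean pick-up distance differs between two groups of drivers by more than chance would allow:

    ```python
    import numpy as np

    def permutation_test(dist_a, dist_b, n_perm=10000, seed=0):
        """Two-sided permutation test on the difference of mean pick-up distances."""
        rng = np.random.default_rng(seed)
        pooled = np.concatenate([dist_a, dist_b])
        observed = abs(dist_a.mean() - dist_b.mean())
        count = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)
            perm_a, perm_b = pooled[:len(dist_a)], pooled[len(dist_a):]
            count += abs(perm_a.mean() - perm_b.mean()) >= observed
        return count / n_perm   # small p-value -> evidence of unfair dispatch

    # e.g. p = permutation_test(distances_group_a, distances_group_b)
    ```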

  • Evolutionary Tinkering Enriches the Hierarchical and Interlaced Structures in Amino Acid Sequences

    Subjects: Information Science and Systems Science >> Basic Disciplines of Information Science and Systems Science Subjects: Biology >> Biological Evolution Subjects: Biology >> Biomathematics Subjects: Physics >> Interdisciplinary Physics and Related Areas of Science and Technology Subjects: Biology >> Genetics submitted time 2023-10-15

    Abstract: Background: In bioinformatics, tools like multiple sequence alignment and entropy methods probe sequence information and evolutionary relationships between species. Although powerful, they might miss crucial hierarchical relationships formed by the reuse of repetitive subsequences like duplicons and transposable elements. Such relationships are governed by "evolutionary tinkering", as described by François Jacob. The newly developed Ladderpath theory provides a quantitative framework to describe these hierarchical relationships.

    Results: Based on this theory, we introduce two indicators: order-rate $\eta$, characterizing sequence pattern repetitions and regularities, and ladderpath-complexity $\kappa$, characterizing hierarchical richness within sequences, considering sequence length. Statistical analyses on real amino acid sequences showed: (1) among the typical species analyzed, humans possess relatively more sequences with large $\kappa$ values; (2) proteins with a significant proportion of intrinsically disordered regions exhibit increased $\eta$ values; (3) there are almost no super-long sequences with low $\eta$. We hypothesize that this arises from varied duplication and mutation frequencies across different evolutionary stages, which in turn suggests a zigzag pattern for the evolution of protein complexity. This is supported by our simulations and by examples from protein families such as Ubiquitin and NBPF.

    Conclusions: Our method emphasizes "how objects are generated", capturing the essence of evolutionary tinkering and reuse. The findings hint at a connection between sequence orderliness and structural uncertainty, and suggest that different species or those in varied environments might adopt distinct protein elongation strategies. These insights highlight our method's value for further in-depth evolutionary biology applications.

  • Multi-view Stereo Imaging Mechanism and Three-dimensional Reconstruction Method of Thermo-optical Near-field Spatial Coherence

    Subjects: Physics >> General Physics: Statistical and Quantum Mechanics, Quantum Information, etc. Subjects: Information Science and Systems Science >> Basic Disciplines of Information Science and Systems Science submitted time 2023-10-12

    Abstract: Stereo vision is based on the principle of depth vision in human binocular eyes and insect compound eyes: multiple cameras obtain digital images of the surrounding scene from different angles, stereo matching finds corresponding points across the images, and the 3D object is reconstructed from the parallax (disparity) of those corresponding points. The neurobiological mechanism behind existing iterative reconstruction and stereo matching methods for binocular vision is still unknown. This work studies the neurobiological mechanism of dual-view stereo vision and derives iterative reconstruction formulas for dual-view and multi-view stereo vision. The conclusions obtained are consistent with the theory of visual neuroscience.
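
    For reference, the standard binocular triangulation relation that underlies disparity-based reconstruction (a textbook identity, not the paper's new result): for two parallel cameras with focal length $f$ and baseline $B$, a scene point whose disparity between the two images is $d$ lies at depth

    $$Z=\frac{fB}{d}.$$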

  • Two families of binary linear codes and their weight distributions and weight hierarchies

    Subjects: Mathematics >> Mathematics (General) Subjects: Information Science and Systems Science >> Information Security Technology submitted time 2023-02-18

    Abstract: Constructing linear codes with few weights is an important research topic in coding and cryptography theory. The weight hierarchy of a code also has basic theoretical significance in coding theory and plays an important role in secret communications. In this paper, a class of 3-weight and a class of 4-weight binary linear codes are constructed, and their weight distributions and weight hierarchies are determined by means of exponential sums.
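
    For readers outside coding theory, the weight hierarchy of an $[n,k]$ linear code $C$ is the sequence $(d_1(C),\dots,d_k(C))$ of generalized Hamming weights (the standard definition, e.g. Wei 1991),

    $$d_r(C)=\min\{\,|\mathrm{supp}(D)| : D\ \text{is an}\ r\text{-dimensional subcode of}\ C\,\},$$

    where $\mathrm{supp}(D)$ is the set of coordinate positions at which some codeword of $D$ is nonzero; $d_1(C)$ is the usual minimum distance.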

  • A Correlation Power Analysis Method for AES Cryptographic Chips

    Subjects: Information Science and Systems Science >> Basic Disciplines of Information Science and Systems Science submitted time 2023-02-14 Cooperative journals: Journal of Guilin University of Electronic Technology

    Abstract: To address the influence of noise and other factors on classical correlation power analysis, a correlation power analysis method for AES cryptographic chips is proposed, based on the linear correlation between Hamming weight and power traces. Exploiting the uneven distribution of the Hamming weight of the S-box output of the cryptographic algorithm, a set of plaintexts strongly correlated with the power traces is obtained by filtering correct keys and wrong keys using a discrimination ratio. In the key recovery stage, the leakage points of the first two S-boxes are found by observing this set of plaintext inputs, and the leakage intervals of the remaining 14 S-boxes are found one by one using a separate guessing method, so that the key information of the remaining bytes can be captured without traversing all power traces. Experimental analysis of an AT89S52 chip shows that the proposed method can correctly recover a one-byte AES key with a 90% success rate using only 9 plaintexts and the corresponding power traces, and its computational complexity is only 4.1% of that of classical correlation power analysis, significantly improving the efficiency of correlation power analysis.
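
    A minimal sketch of the classical CPA recipe that this paper refines (our own illustration on simulated traces; a stand-in random permutation plays the role of the AES S-box, which would be used in practice):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    SBOX = rng.permutation(256)                      # stand-in for the AES S-box
    HW = np.array([bin(v).count("1") for v in range(256)])

    # simulate traces: leakage = HW(SBOX[pt ^ key]) + noise at one time sample
    true_key, n_traces, n_samples, leak_t = 0x3A, 200, 50, 17
    pts = rng.integers(0, 256, n_traces)
    traces = rng.normal(0, 1, (n_traces, n_samples))
    traces[:, leak_t] += HW[SBOX[pts ^ true_key]]

    # CPA: correlate the Hamming-weight model with every time sample, per key guess
    def cpa(traces, pts):
        scores = np.zeros(256)
        for guess in range(256):
            model = HW[SBOX[pts ^ guess]].astype(float)
            corr = [abs(np.corrcoef(model, traces[:, t])[0, 1])
                    for t in range(traces.shape[1])]
            scores[guess] = max(corr)                # best correlation over time
        return scores.argmax()

    print(hex(cpa(traces, pts)))                     # recovers 0x3a
    ```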

  • Differential Power Analysis of the Lightweight Authenticated Encryption Algorithm ASCON

    Subjects: Information Science and Systems Science >> Basic Disciplines of Information Science and Systems Science submitted time 2023-02-14 Cooperative journals: Journal of Guilin University of Electronic Technology

    Abstract: Targeting the structure of the lightweight authenticated encryption algorithm ASCON, a differential power analysis (DPA) method is proposed. It combines the implementation characteristics of the algorithm's S-box, uses the Hamming weight model as the power-consumption discrimination function, groups the traces, and recovers the master key used for encryption. Furthermore, for the "ghost peaks" that appear in DPA attacks, a trace preprocessing method is given: the traces are first grouped according to plaintext and averaged, and DPA attacks are then launched on the preprocessed traces. The 44-bit master key of the ASCON cipher can be recovered by attacking its p^a permutation, with 1500 traces collected. In addition, attacking the original traces directly takes 21849.8889 ms, while attacking the preprocessed traces takes 198.9113 ms, about 1/109 of the time required for the original traces.
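
    A minimal sketch of the two ingredients described here (our own illustration, not the paper's code): plaintext-grouped averaging as preprocessing, followed by a classic difference-of-means DPA with a binary selection function derived from a Hamming-weight model:

    ```python
    import numpy as np

    def preprocess(traces, pts):
        """Average all traces sharing the same plaintext value (the abstract's
        preprocessing against ghost peaks)."""
        values = np.unique(pts)
        avg = np.array([traces[pts == v].mean(axis=0) for v in values])
        return avg, values

    def dpa_diff(traces, selector):
        """Difference of means between traces grouped by a 0/1 selection function."""
        g1, g0 = traces[selector == 1], traces[selector == 0]
        return g1.mean(axis=0) - g0.mean(axis=0)   # peaks at the leaking sample

    # usage sketch: for each key guess, selector = one bit (or a Hamming-weight
    # threshold) of the guessed S-box output; the guess with the largest
    # absolute difference-of-means peak wins.
    ```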

  • A Wi-Fi Positioning Algorithm Based on Matching Optimization and Distance Assistance

    Subjects: Information Science and Systems Science >> Basic Disciplines of Information Science and Systems Science submitted time 2023-02-14 Cooperative journals: Journal of Guilin University of Electronic Technology

    Abstract: To address the low matching accuracy of the sorting-clustering positioning algorithm and the presence of abnormal fingerprint points among the fingerprint points used for position calculation, a Wi-Fi positioning algorithm with matching optimization and distance assistance is proposed. Based on the user's previous and current positions, distance, and step length, a matching-deviation detection model is designed to detect abnormal user positions and matching deviations. Adjacent elements in the sorted received-signal-strength vector are compared with a set threshold to determine where the sorting feature vector of the point to be located changes, the vector is corrected by exchanging those elements, and a corrected, merged class-matching result is obtained. Then, according to the distance between the user's position determined in the time period m before positioning and the fingerprint points in the matching class, abnormal fingerprint points used for position calculation are eliminated, achieving more accurate indoor positioning. Simulation results show that the class-matching accuracy and the average positioning accuracy of the proposed algorithm are improved by 17% and 22%, respectively.
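
    For context, a minimal sketch of the weighted k-nearest-neighbor fingerprint positioning that methods of this family build on (a generic baseline, not the paper's algorithm):

    ```python
    import numpy as np

    def wknn_locate(rss, fingerprints, coords, k=4):
        """rss: measured RSS vector; fingerprints: (n_points, n_aps) database;
        coords: (n_points, 2) fingerprint positions. Returns estimated (x, y)."""
        d = np.linalg.norm(fingerprints - rss, axis=1)      # signal-space distance
        idx = np.argsort(d)[:k]                             # k closest fingerprints
        w = 1.0 / (d[idx] + 1e-9)                           # closer -> larger weight
        return (coords[idx] * w[:, None]).sum(axis=0) / w.sum()
    ```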

  • A Target Imaging Method for Frequency Diverse Array Radar Based on Clustering and Coherent Superposition

    Subjects: Information Science and Systems Science >> Basic Disciplines of Information Science and Systems Science submitted time 2023-02-14 Cooperative journals: Journal of Guilin University of Electronic Technology

    Abstract: To address the blurred target positions and high sidelobes that occur when the back-projection (BP) algorithm images multiple targets, a target imaging method for frequency diverse array (FDA) radar based on clustering and coherent superposition is proposed, following an analysis of the accumulation characteristics of the FDA target echo amplitude. Analysis and simulation of the BP imaging process show that target points are characterized by energy concentration and by an energy difference from virtual image points. The K-means clustering algorithm can make full use of these characteristics to extract and classify the target points in the radar imaging area; time-delay compensation is then applied only to the grid points of the relevant cluster, and the echo amplitudes are stacked, yielding the energy values of the time-delay-compensated grid points in the imaging region and, finally, a clear two-dimensional image of multiple targets. Simulation results show that the proposed method can effectively solve the problems of blurred positions and high sidelobes in multi-target BP imaging and improve the accuracy of the imaging results.
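
    A minimal sketch of the clustering step as described (our own illustration; the per-pixel energy feature is an assumption): K-means on pixel energy separates high-energy target points from lower-energy virtual-image points in a BP image:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def target_mask(bp_image, n_clusters=2):
        """Cluster BP image pixels by energy; return a boolean mask of the
        highest-energy cluster (candidate target points)."""
        energy = np.abs(bp_image).ravel()[:, None]          # per-pixel energy
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(energy)
        # the cluster with the largest mean energy is taken as the target cluster
        target = max(range(n_clusters), key=lambda c: energy[labels == c].mean())
        return (labels == target).reshape(bp_image.shape)
    ```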

  • An MFR Operating Mode Recognition Method Based on Smooth Graph Signals Generated by SOM Clustering

    Subjects: Information Science and Systems Science >> Basic Disciplines of Information Science and Systems Science submitted time 2023-02-14 Cooperative journals: Journal of Guilin University of Electronic Technology

    Abstract: UAV swarms are widely used in radar signal interception owing to their wide sensing range and rapid information sharing. To address the difficulty of directly fusing and analyzing the signal samples intercepted by a UAV swarm, and the low recognition accuracy for multi-function radar (MFR) operating modes under few training samples and imbalanced mode samples, an MFR operating mode recognition method based on smooth graph signals generated by self-organizing map (SOM) clustering is proposed. First, the intercepted signal samples are clustered with a distributed SOM algorithm to extract the similarity between samples. Then, according to the clustering results, the signal sample set is represented as a smooth graph signal, establishing the correlation of signal samples within the same operating mode. Finally, a graph attention network is used to fuse and classify the node data of the above graph signals to complete MFR operating mode recognition. Experimental results show that when the mode-sample imbalance is about 10:1 and there are 25 training samples per class, the recognition accuracy and F1 measure of this method are improved by 22.8% and 22.34%, respectively, compared with existing methods, and the method remains applicable under noise interference.
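
    A minimal sketch of the clustering stage (our own illustration using the MiniSom library as a stand-in for the paper's distributed SOM; the grid size and iteration count are assumptions):

    ```python
    import numpy as np
    from minisom import MiniSom

    def som_cluster(samples, grid=5, n_iter=2000, seed=0):
        """Assign each signal sample to the SOM node (cluster) that best matches it."""
        som = MiniSom(grid, grid, samples.shape[1], sigma=1.0,
                      learning_rate=0.5, random_seed=seed)
        som.train_random(samples, n_iter)
        return np.array([som.winner(s) for s in samples])   # (row, col) per sample

    # samples sharing a winner node can then be linked when building the graph
    # whose node features feed the graph attention network.
    ```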

  • A Medium Parameter Estimation Method Based on Frequency Diverse Array Ground Penetrating Radar

    Subjects: Information Science and Systems Science >> Basic Disciplines of Information Science and Systems Science submitted time 2023-02-14 Cooperative journals: Journal of Guilin University of Electronic Technology

    Abstract: To address the problem of estimating the parameters of the medium around buried objects, a medium parameter estimation method based on image entropy is proposed. First, the point spread function (PSF) of frequency diverse array ground penetrating radar (FDA-GPR) in a medium is derived, and the relationship between the PSF and the back-projection imaging algorithm is analyzed. Then, the imaging results of FDA-GPR and UWB-GPR are compared. For a given region containing the target to be imaged, the medium parameters strongly influence the propagation velocity of the electromagnetic wave, so imaging the region with different assumed medium parameters yields different results. The image entropy of the imaging result is computed for each candidate parameter set: the smaller the image entropy, the better the focus of the image, and the closer the corresponding medium parameters are to their true values. Experimental results show that when the conductivity of the medium is nonzero, FDA-GPR outperforms UWB-GPR in target location and imaging, and that for slender targets the proposed method can effectively estimate the parameters of the medium around the target.
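
    A minimal sketch of the focus metric described (our own illustration; one common definition of image entropy among several): normalize the image energy to a distribution, compute its Shannon entropy, and keep the candidate medium parameters that minimize it:

    ```python
    import numpy as np

    def image_entropy(img, eps=1e-12):
        """Shannon entropy of the normalized energy distribution of an image.
        Sharper (better-focused) images concentrate energy -> lower entropy."""
        p = np.abs(img) ** 2
        p = p / (p.sum() + eps)
        return -(p * np.log(p + eps)).sum()

    # parameter-search sketch: image the same region under each candidate
    # permittivity and keep the one whose image is best focused, e.g.
    # best = min(candidates, key=lambda er: image_entropy(bp_image(data, er)))
    ```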