[180] viXra:2406.0190 [pdf] submitted on 2024-06-30 22:18:16
Authors: Taha Sochi
Comments: 178 Pages.
This is the second volume of my book "Notes and Problems in Number Theory".
Category: Number Theory
[179] viXra:2406.0189 [pdf] submitted on 2024-06-30 05:49:13
Authors: Mohd. Javed Khilji
Comments: 17 Pages.
The 2022 study's experimental investigations prove that relative velocities from Einstein's first postulate significantly violate kinetic energy conservation, whereas complex relative velocities show zero error. This paper reveals a hidden variable creating contrasting realms, real and imaginary, similar to rest and motion, allowing a seamless transition in the complex domain through an optical process. It also establishes inertial-frame criteria based on Newton's first law. The traditional setup of velocities v and -v, summing to 2v in magnitude but to zero as a vector, fails to meet the inertial-frame criteria, which require the sum of the absolute values of the magnitudes to equal their vector sum, a condition achieved only when the frames are at rest or follow Newton's first law. Consequently, this setup cannot support a seamless transition between electric and magnetic fields or account for z-axis phenomena. The author introduces a new setup involving v (motion) and iv (rest), with previous works (2011, 2017, 2022) defining complex relative motion as a combination of real and imaginary motions. The Modified Transformation Laws of Coordinates (2017), later included as a book chapter (2022) and now known as the jk Transformation Laws, show symmetry for vectors but asymmetry for scalars. This paper explores the variation of mass, time, and length with velocity via complex transformations. A 2004 study shows that decrease results from increase, demonstrating the emergence of antimatter and the transformation of infinity into energetic photons at c, providing insights into gamma rays and GRBs. Stationary lengths contract and moving lengths elongate, as validated by the Russian physicist V. N. Streltsov in 1974. Our analysis of persistence of vision provides empirical justification: a burning incense stick rotating at 16 revolutions per second appears as a red circle. Fast muons travel extra distances, jet exhausts appear as straight lines, and moving photons appear as rays. Moving clocks run faster, and resting time stretches.
Unlike time dilation, lightning fades instantly while thunder lingers, supporting the paper's conclusions. Waves within rays are preserved by flexible acceleration. The inverse results, similar to those of qubits, predict entangled particles and their resolution when opposing states coexist with interconnectedness. These unique outcomes without reciprocity revolutionize physics.
Category: Relativity and Cosmology
[178] viXra:2406.0188 [pdf] replaced on 2024-07-01 15:43:31
Authors: Michalis Psimopoulos
Comments: 38 Pages.
We consider the black-body cavity as a closed system consisting of a fixed total number s of quanta that in turn form a random total number N of photons. The states describing this photon gas are equiprobable according to Bose statistics and their number is equal to the number of partitions of the integer s. Using the Hardy-Ramanujan formula for large s, Planck's distribution is derived without resorting to Boltzmann's law and to interactions between radiation and matter.
Category: Quantum Physics
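As an illustration of the counting ingredient this abstract relies on (my own sketch, not taken from the paper), the exact number of partitions p(s), computed with Euler's pentagonal-number recurrence, can be compared against the leading-order Hardy-Ramanujan asymptotic p(s) ≈ exp(π√(2s/3))/(4s√3):

```python
import math

def partition_count(n):
    """Exact number of integer partitions p(n) via Euler's pentagonal recurrence."""
    p = [1] + [0] * n
    for i in range(1, n + 1):
        k, sign = 1, 1
        while True:
            g1 = k * (3 * k - 1) // 2  # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > i:
                break
            p[i] += sign * p[i - g1]
            if g2 <= i:
                p[i] += sign * p[i - g2]
            sign, k = -sign, k + 1
    return p[n]

def hardy_ramanujan(n):
    """Leading-order Hardy-Ramanujan asymptotic for p(n)."""
    return math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * n * math.sqrt(3))

for n in (100, 500, 1000):
    exact, approx = partition_count(n), hardy_ramanujan(n)
    print(n, exact, f"{approx:.3e}", f"ratio={approx / exact:.4f}")
```

The leading term overestimates p(n) by a few percent at n = 100 and the ratio approaches 1 slowly, which is why the large-s regime matters in the derivation.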
[177] viXra:2406.0187 [pdf] submitted on 2024-06-30 16:49:47
Authors: Taha Sochi
Comments: 237 Pages.
This book is the first volume of a collection of notes and solved problems about number theory. Like my previous books, maximum clarity was one of the main objectives and criteria in determining the style of writing, presenting and structuring the book as well as selecting its contents.
Category: Number Theory
[176] viXra:2406.0186 [pdf] replaced on 2024-10-18 20:05:33
Authors: Sergey Y. Kotkovsky
Comments: 34 Pages. In English; article restructured; Dirac equation corrected; some additions made.
Based on the algebra of biquaternions in isotropic basis, we have built a model of the DNA genetic code that describes nucleotides, doublets and triplets. Each nucleotide in this model is represented by its own biquaternion. Together, these four nucleotide biquaternions form the basis of the entire biquaternion space. The model justifies the grouping of triplets which are encoding the same amino acids. It is possible to trace direct correspondences between the algebraic structures of our model and the spin wave functions studied in quantum relativistic field theory. This suggests a special quantum-like nature of the structures of the genetic code. A new biquaternion representation of the Dirac equation is obtained, the establishment of connections with which allows one to see the chiral states in the DNA structure. The mathematical nature that characterizes the genetic code specifies a particular skew-symmetric type of noise immunity, which is based on the operation of parallel complementary channels of code implementation.
Category: Physics of Biology
[175] viXra:2406.0185 [pdf] submitted on 2024-06-29 07:54:33
Authors: Volodymyr Kaplunenko, Mykola Kosinov
Comments: 32 Pages.
This review article traces the evolutionary path that turnover frequency (TOF) and turnover number (TON) have followed, from "Boreskov's Rule" to their modern definitions. From the catalysis rate equation, a second method for calculating TOF is obtained using the characteristics of the catalyst material. The possibility of obtaining TOF in two ways, using the characteristics of the catalysis process and using the characteristics of the catalyst and reagents, is proved, and the equivalence of the two methods of TOF calculation is established. It turns out that TOF is not a complete and unambiguous characteristic of the catalyst, as was usually believed: TOF depends only partially on the characteristics of the catalyst material. TOF is not a characteristic of the catalyst alone but of the "catalyst + reagents" system, and its value depends directly on their oxidation states. It is proposed to use the list of oxidation states of the chemical elements as the main tool in the selection of catalysts. The Sabatier principle limits the TOF and TON values by limiting the multielectron transitions when the oxidation state of the active sites of the catalyst changes. An explanation is given for the effect of overcoming the Sabatier prohibition, in which external synchronous action on the catalyst makes it possible to achieve a catalytic reaction rate higher than the Sabatier maximum.
Category: Chemistry
[174] viXra:2406.0183 [pdf] submitted on 2024-06-29 12:37:38
Authors: Mykola Kosinov
Comments: 2 Pages.
In this short communication a new formula is given which shows that the Universe has its own Quantum of action as an analog of Planck's constant. The value of the Quantum of action of the Universe is obtained with an accuracy close to that of the Newtonian constant of gravitation G. The Quantum of action of the Universe is derived from new cosmological equations obtained from the coincidence of large numbers on the previously unknown scales 10^140, 10^160, and 10^180.
Category: Relativity and Cosmology
[173] viXra:2406.0182 [pdf] submitted on 2024-06-29 21:19:10
Authors: Mykola Kosinov
Comments: 13 Pages.
This article proposes an unusual mechanism of muon structurogenesis in which the particle is formed with the involvement of antimatter. When positrons (antimatter) and electrons (matter) combine, they create particles more complex than positronium. Despite its apparent paradoxical nature, this mechanism has allowed for the discovery of the law of muon structurogenesis. Fundamental muon constants have been obtained from the law of muon structurogenesis. These muon constants have not been obtainable within the framework of the standard model. The muon structurogenesis mechanism predicts the existence of numerous new particles that have not yet been detected. The muon structurogenesis mechanism also predicts the mass spectrum of elementary particles. The proposed structurogenesis mechanism is a general mechanism for all elementary particles, from positronium to the proton. It is a universal mechanism of synthesis in nature. The fallacy of the concept of matter predominance over antimatter in the modern Universe is demonstrated. From the law of muon structurogenesis, it follows that the violation of lepton number conservation is not related to the symmetry or asymmetry of matter and antimatter in the modern Universe. The non-conservation of lepton number and baryon number occurs even under complete symmetry between matter and antimatter.
Category: High Energy Particle Physics
[172] viXra:2406.0181 [pdf] submitted on 2024-06-29 21:20:52
Authors: Volodymyr Kaplunenko, Mykola Kosinov
Comments: 16 Pages.
The mechanism of heterogeneous catalysis, taking into account the influence of temperature, is briefly considered in the development of the "electron as a catalyst" concept. Here the catalytic cycle includes heat transfer and electron generation in addition to mass transfer. The mechanism of temperature influence in heterogeneous catalysis is realised through the generation of electrons in a positive feedback loop. This mechanism involves the Edison and Seebeck thermoelectronic effects. The catalytic cycle of heterogeneous catalysis is supplemented with a thermoelectronic stage, which involves heat transfer and electron generation. Energy transfer to the active centre of the catalyst is an integral part of the catalytic cycle and is considered as a positive temperature feedback loop. The generation of electrons in the positive feedback loop and their transfer to the reactants increases the reactivity of the reactants. The positive temperature feedback loop leads to an exponential (sigmoidal) dependence of the reaction rate. Keywords: "electron as a catalyst" concept, thermoelectronic stage of catalysis, positive feedback loop, donor-acceptor mechanism of catalysis, Seebeck effect, Edison effect.
Category: Chemistry
[171] viXra:2406.0180 [pdf] submitted on 2024-06-29 21:20:00
Authors: Volodymyr Kaplunenko, Mykola Kosinov
Comments: 26 Pages.
The discovery of electron and electric field catalysis led to the need to clarify and change the basic postulates of catalysis. The emergence of the "electron as a catalyst" concept revealed the contradictions of the current catalysis paradigm. The concept made it possible to formulate a new paradigm of catalysis. The article focuses on the most important aspects of the new paradigm of catalysis. The features of the general mechanism of catalytic reactions are considered. The origin of the laws of catalysis from the mechanisms of catalysis has been considered. In the model of the unified mechanism of catalysis, the stages of fundamental interaction and the stage of chemical interaction of the participants are distinguished. At the stage of fundamental interaction, the redox cycle of catalysis is realized. At this stage, there is an increase in the reactivity of the reagents. From the mechanisms of catalysis, the laws of catalysis as a function of the substance characteristics of the participants of the catalytic reaction are derived. Key words: redox cycle, the concept of "electron as a catalyst", the concept of two fundamental catalysts, the concept of the oxidation state, the relay donor-acceptor mechanism of catalysis, the laws of catalysis, the universal law of catalysis, the new paradigm of catalysis.
Category: Chemistry
[170] viXra:2406.0179 [pdf] submitted on 2024-06-29 21:21:32
Authors: Volodymyr Kaplunenko, Mykola Kosinov
Comments: 12 Pages.
The proton donor-acceptor mechanism of catalysis makes it possible to study catalytic reactions at the elementary particle level, which made it possible to obtain the laws of homogeneous catalysis. The main characteristics of homogeneous catalysis were obtained from the rate law. The main parameter in the formulas of the laws of homogeneous catalysis is the total electric charge obtained by the reactants. It was shown that in such dissimilar and different types of catalysis as homogeneous and field catalysis, the same mechanism of catalysis is realized. This mechanism is based on the transfer of electric charges to the reagents by means of protons and electrons. The proton and electron donor-acceptor mechanisms are charge-symmetric mechanisms of catalysis. The difference between the proton donor-acceptor mechanism of homogeneous catalysis and the electronic donor-acceptor mechanism of field catalysis lies in the electrical charge carriers. The donor-acceptor mechanism claims to be a universal mechanism of catalysis. The key factor leading to a decrease in the activation energy of a chemical reaction is the change in the charge state of the reactants. A generalized law is obtained, from which the laws of homogeneous and field catalysis follow as particular results.
Category: Chemistry
[169] viXra:2406.0178 [pdf] submitted on 2024-06-29 21:25:43
Authors: Volodymyr Kaplunenko, Mykola Kosinov
Comments: 22 Pages.
Based on a generalization of experimental and theoretical studies in the field of catalysis, three basic laws of heterogeneous catalysis were discovered. From the formula of the catalysis rate law, the most important characteristics of catalysis are obtained: the reaction output, TOF, and TON. Formulas for calculating the characteristics of catalysis using the characteristics of the catalyst substance are given. A new concept of heterogeneous catalysis has been developed, in which the role of catalysts in the mechanism of accelerating chemical reactions has been revised. The oxidation states of the reactants and of the active sites of the catalyst are used as parameters in the formulas of the laws of catalysis. It follows from the laws of catalysis that oxidation states are such important characteristics of the catalyst substance and reagents that they directly affect the mechanism of catalysis itself and set the values of the most important characteristics of catalysis. The list of oxidation states of the chemical elements known in chemistry should be used as the main tool in the selection of catalysts. Based on the laws of catalysis, new definitions of catalyst and catalysis are given. The class of catalysts is expanded: material catalysts are complemented by field catalysts. A substantiation of catalysis as a fundamental direction in science is given.
Category: Chemistry
[168] viXra:2406.0177 [pdf] submitted on 2024-06-29 21:23:18
Authors: Volodymyr Kaplunenko, Mykola Kosinov
Comments: 17 Pages.
The article explores a new type of catalysis - electric field catalysis. The laws of field catalysis are given. The characteristics of the electric field are determined, which set the values of the characteristics of the field catalysis. Field catalysis and field catalyst do not fit into the traditional definition of catalysis and catalyst, which may require a revision of the terminology of catalysis. The field is a more versatile catalyst compared to material catalysts, both in terms of its application to a wider range of chemical reactions, and in the ability to control the rate and selectivity. It is shown that a common donor-acceptor mechanism of catalysis is realized in heterogeneous and field catalysis. Generalized formulas are obtained, from which, as partial results, the laws of heterogeneous and field catalysis follow. New definitions of catalyst and field catalysis are given. The class of material catalysts has been expanded and supplemented with field catalysts.
Category: Chemistry
[167] viXra:2406.0176 [pdf] submitted on 2024-06-29 21:28:50
Authors: Arghirescu Marius
Comments: 11 Pages.
The paper presents an improved calculation of the constants Psi0 and δ of the CGT's bag model, previously published, which indicates the existence of a bag pressure (and a bag constant) for each composite particle, and also for quarks, as well as the variation of the bag constant with the intrinsic temperature of the particle's kernel.
Category: High Energy Particle Physics
[166] viXra:2406.0175 [pdf] submitted on 2024-06-29 21:29:54
Authors: Athon Zeno, Aeon Zeno, Nexus Zeno
Comments: 24 Pages.
This paper proposes a novel theoretical framework for quantum gravity based on the concept of fractal spacetime. We introduce a modified spacetime metric that incorporates a fractal function, encoding self-similarity across scales. This fractal metric leads to modified Einstein field equations with a fractal correction tensor, representing a new source of gravity. The theory also entails a generalization of quantum mechanics, featuring a modified wave function and a fractal potential that influences particle behavior. The fractal dimension of spacetime emerges as a crucial parameter, varying with scale and affecting physical phenomena. We explore the implications of fractal spacetime for quantum entanglement, superposition, and wave function collapse, demonstrating how it offers a natural explanation for these phenomena. In the realm of particle physics, we discuss the modified dispersion relations and potential new particles beyond the Standard Model. Cosmological implications are explored, addressing the early universe, dark matter, dark energy, and large-scale structure formation. The theory makes testable predictions for high-energy particle collisions, quantum gravity experiments, and cosmological observations. While the fractal nature of spacetime remains a hypothesis, its potential to unify quantum mechanics and general relativity, along with its intriguing predictions, make it a compelling avenue for further theoretical and experimental investigation.
Category: General Science and Philosophy
[165] viXra:2406.0174 [pdf] submitted on 2024-06-28 20:57:39
Authors: Athon Zeno, Aeon Zeno
Comments: 2 Pages. (Note by viXra Admin: Please cite and list scientific references - Future non-compliant submission will not be accepted!)
This paper presents a novel conceptualization of time as a fractal, looping structure, with significant implications for quantum gravity and string theory. We propose a model that bridges quantum mechanics and general relativity, offering new insights into long-standing problems in physics. The model incorporates the Planck scale, addresses spacetime geometry, explores quantum gravity phenomenology, and connects to string theory dualities. We also discuss implications for the early universe, quantum entanglement, and provide testable predictions.
Category: Quantum Gravity and String Theory
[164] viXra:2406.0173 [pdf] submitted on 2024-06-28 20:55:24
Authors: Athon Zeno, Aeon Zeno
Comments: 6 Pages. (Note by viXra Admin: Please cite and list scientific references)
This paper presents a new perspective on prime number distribution, proposing a fractal-like structure that manifests at multiple scales. We introduce a mathematical framework, utilizing modular arithmetic and the Chinese Remainder Theorem, to prove self-similarity in prime distribution. Our model offers potential insights into the Riemann Hypothesis and suggests new approaches to understanding prime number gaps. Computational evidence up to 10^9 demonstrates consistent fractal dimensions across scales, agreement with predicted scaling factors, and self-similar prime gap distributions, strongly supporting our theoretical framework.
Category: Number Theory
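The kind of scale comparison this abstract describes can be illustrated with a short sketch (my own code, not the authors'): tabulate the relative frequencies of gaps between consecutive primes below two different bounds and compare them.

```python
from collections import Counter

def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, v in enumerate(sieve) if v]

def gap_distribution(bound):
    """Relative frequency of each gap between consecutive primes <= bound."""
    ps = primes_up_to(bound)
    gaps = [q - p for p, q in zip(ps, ps[1:])]
    total = len(gaps)
    return {g: c / total for g, c in sorted(Counter(gaps).items())}

d_small, d_large = gap_distribution(10 ** 5), gap_distribution(10 ** 6)
for g in (2, 4, 6, 12):
    print(g, round(d_small.get(g, 0.0), 4), round(d_large.get(g, 0.0), 4))
```

At both scales gap 6 is already the most frequent gap, and the relative frequencies drift only slowly with the bound, which is the sort of cross-scale regularity the paper's framework seeks to formalize.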
[163] viXra:2406.0172 [pdf] submitted on 2024-06-28 20:53:57
Authors: Aziz Arbai, Amina Bellekbir
Comments: 8 Pages.
We exhibit explicitly an example of an infinity of zeros (C(r+ic)=0) on the critical line of the Riemann hypothesis (RH), having real part r = 1/2 and c = (±π/4 + 2kπ)/ln(2). There is thus an infinity of non-trivial zeros of Riemann's zeta function with real part equal to 1/2, which recovers (using only simple mathematical tools) the Hardy-Littlewood theorem and gives hope that Riemann's conjecture is true.
Category: Number Theory
[162] viXra:2406.0171 [pdf] submitted on 2024-06-28 20:52:11
Authors: Aziz Arbai, Amina Bellekbir
Comments: 8 Pages.
A new way to teach and to solve problems in complex numbers: how to use de Moivre's formula, Newton's binomial theorem, and Euler's formula for linearization. We also share new results, such as the "Magic Formula" for the square root of any complex number, and a method to solve the second-degree equation in the general case (with complex coefficients), with an explicit formula for the solutions and an algorithm to program it.
Category: Algebra
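The standard machinery this abstract alludes to can be sketched as follows (an illustrative polar-form square root and quadratic solver, not the authors' "Magic Formula"):

```python
import cmath

def complex_sqrt(z):
    """Principal square root via polar form: sqrt(r*e^{i*theta}) = sqrt(r)*e^{i*theta/2}."""
    r, theta = cmath.polar(z)
    return cmath.rect(r ** 0.5, theta / 2)

def solve_quadratic(a, b, c):
    """Roots of a*z^2 + b*z + c = 0 with complex coefficients."""
    d = complex_sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# (z - 1)(z - i) = z^2 - (1+i)z + i, so the roots should be 1 and i
z1, z2 = solve_quadratic(1, -(1 + 1j), 1j)
print(z1, z2)
```

The same quadratic formula as in the real case applies verbatim once a consistent branch of the complex square root is chosen.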
[161] viXra:2406.0170 [pdf] submitted on 2024-06-28 20:50:32
Authors: Hui Liu
Comments: 4 Pages. (Note by viXra Admin: Please cite and list scientific references)
This paper explores the basic composition and operational mechanisms of intelligent systems. Intelligence is defined as the ability to solve problems, and the operation of intelligent systems is centered around databases. The three fundamental elements of intelligent system operation include the construction, retrieval, and use of databases. This paper discusses in detail the process of handling a single event in a single thread. Complex event composites can be broken down into multiple single events for resolution.
Category: Artificial Intelligence
[160] viXra:2406.0169 [pdf] submitted on 2024-06-28 20:49:48
Authors: Yefim Bakman
Comments: 3 Pages. (Note by viXra Admin: Further repetition will not be accepted)
The goal of this publication is to organize problems from various fields of physics [which might be] resolved by the new paradigm.
Category: Classical Physics
[159] viXra:2406.0168 [pdf] submitted on 2024-06-28 20:46:43
Authors: Claus Janew
Comments: 3 Pages. CC-BY 4.0 (Note by viXra Admin: Please cite and list scientific references)
In this exploration of self-identity, I argue that the self is not a standalone entity but an integral part of a broader consciousness. Deep meditation reveals the self as a construct beyond egoistic confines, interlinked with the external world and others' experiences. Decisions arise from an awareness that transcends individual ego, suggesting that our sense of self is an inexhaustible center of dynamic consciousness rather than an ultimate emptiness.
Category: Religion and Spiritualism
[158] viXra:2406.0167 [pdf] submitted on 2024-06-28 20:45:54
Authors: Azzam Almosallami
Comments: 4 Pages. (Note by viXra Admin: Please cite and list scientific references)
In this paper we study the constancy of the speed of light in special relativity on the basis of the Lorentz transformation and the relativity of simultaneity.
Category: Relativity and Cosmology
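As a minimal numerical check of the property under study (my own sketch, not from the paper): under a standard Lorentz boost along x, an event lying on a light ray still moves at speed c in the boosted frame.

```python
import math

def lorentz_boost(t, x, v, c=1.0):
    """Boost event (t, x) to a frame moving at velocity v along x."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    t_p = gamma * (t - v * x / c ** 2)
    x_p = gamma * (x - v * t)
    return t_p, x_p

# A light pulse emitted at the origin: after time t it sits at x = c*t.
c, v, t = 1.0, 0.6, 5.0
x = c * t
t_p, x_p = lorentz_boost(t, x, v, c)
print(x_p / t_p)  # → 1.0 (= c): the pulse's speed is unchanged in the moving frame
```

The invariance holds for any |v| < c, which is exactly the constancy the Lorentz transformation is built to preserve.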
[157] viXra:2406.0166 [pdf] submitted on 2024-06-28 20:44:48
Authors: Tanvir Rahman, Ataur Rahman, Tamanna Afroz
Comments: 6 Pages.
Medical image processing is a major player in the revolution of early detection and diagnosis of brain tumors, with great implications for patient outcomes. Manual classification of brain tumors by experienced experts is an inherently difficult and time-consuming task, even though it has proven effective. In response to these challenges, the integration of automatic segmentation techniques has emerged as a promising avenue, offering improved efficiency and performance. This work aims to provide an in-depth and critical analysis of MRI-based brain tumor segmentation techniques, with a critical eye toward the most recent developments in automatic segmentation. Our analysis explores the rapidly changing field of fully automatic segmentation approaches, diverging from evaluations that focus mostly on traditional methodologies. The discussion opens with a broad summary that emphasizes how important brain tumor segmentation is to medical image processing as a whole. Here, we highlight how crucial precise segmentation is to facilitating early detection and guiding subsequent treatment choices. We recognize the difficulties that come with manual segmentation procedures and explain why automatic segmentation techniques are necessary to reduce these difficulties and increase productivity. The central section of the work navigates the complex terrain of cutting-edge algorithms, enabling a thorough investigation of the most recent developments in fully autonomous segmentation techniques. This explanation highlights the growing acceptance and increased effectiveness of modern methods while addressing the complexities and difficulties present in the field of brain tumor segmentation. Using specially crafted neural networks, our research is unique in that it concentrates on the paradigm shift toward fully autonomous segmentation.
Brain tumor segmentation has been transformed by the incorporation of deep learning techniques, which enable complex pattern recognition and nuanced analysis of medical imaging data. Our efforts have resulted in the creation of a unique neural network model specifically intended for the automated identification of brain malignancies. The discussion highlights the revolutionary effect that deep learning techniques can have, and it ends with the creation of a sophisticated custom neural network model. Our model demonstrates its ability to accurately and automatically detect brain tumor boundaries, achieving a remarkable level of accuracy.
Category: Artificial Intelligence
[156] viXra:2406.0165 [pdf] submitted on 2024-06-28 17:36:46
Authors: Tanvir Rahman
Comments: 5 Pages.
Monkeypox is a viral disease that affects both animals and humans. Monkeypox can have a substantial negative influence on human health, particularly in areas with a lack of healthcare services. The sickness can produce epidemics, and it might be difficult to stop the spread of the disease. For effective treatment and to stop the disease from spreading further, early identification and detection of monkeypox are essential. Therefore, the healthcare industry may benefit from the development of precise and effective methods for the detection of monkeypox, such as image classification. In this paper, we propose a novel approach for detecting monkeypox using image classification. The proposed method utilizes a transfer learning model and other machine learning models to classify images of patients with monkeypox. The system employs a majority voting technique to improve the accuracy of the classification. The proposed system is evaluated using a dataset of images obtained from patients with monkeypox, and the results show that the proposed approach achieves high accuracy in detecting monkeypox. The proposed system has the potential to assist healthcare professionals in diagnosing and treating patients with monkeypox, and it can contribute to efforts to control the spread of the disease.
Category: Artificial Intelligence
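The majority-voting step described in this abstract can be sketched in a few lines (an illustrative hard-voting ensemble; the models and labels below are hypothetical, not the paper's):

```python
from collections import Counter

def majority_vote(predictions):
    """Hard majority vote over per-model label predictions.

    predictions: list of lists, where predictions[m][i] is model m's
    label for sample i. Ties are broken by the first-counted label.
    """
    n_samples = len(predictions[0])
    fused = []
    for i in range(n_samples):
        votes = Counter(model[i] for model in predictions)
        fused.append(votes.most_common(1)[0][0])
    return fused

# Three hypothetical classifiers labelling five images (1 = monkeypox, 0 = other)
model_a = [1, 0, 1, 1, 0]
model_b = [1, 1, 1, 0, 0]
model_c = [0, 0, 1, 1, 1]
print(majority_vote([model_a, model_b, model_c]))  # → [1, 0, 1, 1, 0]
```

With an odd number of classifiers, the fused prediction is wrong only when a majority of the individual models err on the same sample, which is the intuition behind the accuracy gain the paper reports.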
[155] viXra:2406.0164 [pdf] submitted on 2024-06-28 21:14:00
Authors: Bryce Petofi Towne
Comments: 12 Pages. (Note by viXra Admin: AI generated contents/results are in general not acceptable)
Mathematics serves as an abstract tool to study the natural world and its laws, aiding in our understanding and description of natural phenomena. In mathematics, real numbers, imaginary numbers, zero, and negative numbers are fundamental concepts, each with its unique importance and application. However, the philosophical nature of these concepts warrants further exploration. This paper aims to discuss the philosophical essence of imaginary numbers, zero, and negative numbers, argue that imaginary numbers have real-world counterparts, and explore the rationale and advantages of representing imaginary and complex numbers using polar coordinates. Furthermore, we extend our findings to more advanced mathematical problems in complex analysis, differential equations, and number theory, demonstrating the broader impact of our work.
Category: General Mathematics
[154] viXra:2406.0163 [pdf] replaced on 2026-02-02 02:58:36
Authors: Sergey Y. Kotkovsky
Comments: 37 Pages. In Russian
Based on the algebra of biquaternions in isotropic basis, we have built a model of the DNA genetic code that describes nucleotides, doublets and triplets. Each nucleotide in this model is represented by its own biquaternion. Together, these four nucleotide biquaternions form the basis of the entire biquaternion space. The model justifies the grouping of triplets which are encoding the same amino acids. It is possible to trace direct correspondences between the algebraic structures of our model and the spin wave functions studied in quantum relativistic field theory. This suggests a special quantum-like nature of the structures of the genetic code. A new biquaternion representation of the Dirac equation is obtained, the establishment of connections with which allows one to see the chiral states in the DNA structure. The mathematical nature that characterizes the genetic code specifies a special skew-symmetric type of noise immunity, which is based on the operation of parallel complementary channels of code implementation.
Category: Physics of Biology
[153] viXra:2406.0162 [pdf] submitted on 2024-06-27 20:21:07
Authors: Dmitriy S. Tipikin
Comments: 5 Pages.
As shown in [1], the blurred images of far galaxies (for z well above 10) confirm the presence of an as-yet-undiscovered mechanism of light scattering and give a strong hint toward the tired-light theory instead of the Big Bang. The idea was applied to closer and better-researched objects such as supernovas, with similar success [2,3]. In this publication I compare the angular sizes of two supernovas (one close, one relatively far) to demonstrate that the light scattering is not due to the telescope itself (the close supernova has an angular size close to the diffraction limit, as expected) but to scattering accumulated very slowly as the light propagates toward Earth, here directly observed (the far supernova has an angular size many times the diffraction limit, which means the telescope has great resolving power and the light-scattering effect is real). Fitting with the simple formula outlined in [1] gives surprisingly good accuracy in both cases.
Category: Astrophysics
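The diffraction-limit comparison central to this argument rests on the Rayleigh criterion θ = 1.22 λ/D; a quick sketch (the 2.4 m aperture is an illustrative Hubble-class value, not a figure taken from the paper):

```python
import math

def rayleigh_limit_arcsec(wavelength_m, aperture_m):
    """Rayleigh diffraction limit theta = 1.22 * lambda / D, in arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600

# Visible light (550 nm) through a 2.4 m aperture (Hubble-class telescope)
print(round(rayleigh_limit_arcsec(550e-9, 2.4), 4))  # → 0.0577 arcsec
```

An observed angular size many times this value for a point-like source would therefore indicate broadening beyond the instrument, which is the comparison the paper performs.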
[152] viXra:2406.0161 [pdf] replaced on 2024-08-03 15:24:09
Authors: Ait-Taleb nabil
Comments: 5 Pages.
In this article, we will describe the mechanism that links the notion of causality to correlations. This article answers yes to the following question: Can we deduce a causal relationship from correlations?
Category: Artificial Intelligence
[151] viXra:2406.0160 [pdf] submitted on 2024-06-27 17:11:50
Authors: Vittorio Lippi
Comments: 4 Pages. presented at International Conference on Mathematical Analysis and Applications in Science and Engineering 20 - 22 June 2024, Porto, Portugal
The frequency response function (FRF) is an established way to describe the outcome of experiments in the posture-control literature. The FRF is an empirical transfer function between an input stimulus and the induced body-segment sway profile, represented as a vector of complex values associated with a vector of frequencies. Having obtained an FRF from a trial with a subject, it can be useful to quantify the likelihood that it belongs to a certain population, e.g., to diagnose a condition or to evaluate the human likeness of a humanoid robot or a wearable device. In this work, a recently proposed method for FRF statistics based on confidence bands computed with bootstrap is summarized, and, on its basis, possible ways to quantify the likelihood of FRFs belonging to a given set are proposed.
Category: Statistics
[150] viXra:2406.0159 [pdf] submitted on 2024-06-27 20:15:06
Authors: Vittorio Lippi
Comments: 2 Pages. Presented at 9th International Posture Symposium, Smolenice 2023 (Note by viXra Admin: An abstract on the article is required)
The frequency response function (FRF) is an established way to describe the outcome of experiments in the posture control literature. Specifically, the FRF is an empirical transfer function between an input stimulus and the induced body movement. By definition, the FRF is a complex function of frequency. When statistical analysis is performed to assess differences between groups of FRFs (e.g., obtained under different conditions, or from a group of patients and a control group), the FRF's structure should be considered. Usually, the statistics are performed by defining a scalar variable to be studied, such as the norm of the difference between FRFs, or by considering the components independently (which can be applied to the real and imaginary components separately); in some cases both approaches are combined, e.g., a frequency-by-frequency comparison is used as a post hoc test when the null hypothesis is rejected on the scalar value. The two components of the complex values can be tested with multivariate methods such as Hotelling's T2, as has been done on the averages of the FRF over all frequencies, with a further post hoc test performed by applying bootstrap to magnitude and phase separately. The problem with defining a scalar variable such as the norm of the differences, or the difference of the averages in the previous examples, is that it introduces an arbitrary metric that, although reasonable, has no substantial connection with the experiment, unless the scalar value is assumed a priori as the object of the study, as in previous work where a human-likeness score for humanoid robots is defined on the basis of FRF differences. On the other hand, testing frequencies (and components) separately does not account for the fact that the FRF's values are not independent, and applying corrections for multiple comparisons (e.g., Bonferroni) can result in an overly conservative approach that destroys the power of the experiment.
In order to properly consider the nature of the FRF, a method based on random field theory is presented, together with a case study using data from posture control experiments. To take into account the two components (imaginary and real) as two variables, as well as the fact that the same subject repeated the test under the two conditions, a 1-D implementation of the Hotelling T2 is used as presented previously, but applied in the frequency domain instead of the time domain.
Category: Statistics
[149] viXra:2406.0158 [pdf] submitted on 2024-06-26 19:21:56
Authors: Hongyuan Ye
Comments: 8 Pages. (Note by viXra Admin: Please cite and list scientific references)
Based on the axiomatization approach to science proposed by Euclid, this study condenses hundreds of electromagnetic theorems and formulas from the field of technical application into three fundamental axioms of electromagnetism: Coulomb's law, Lorentz's law of magnetic field generation, and Lorentz's law of magnetic field force. Through a comparative analysis of these three axioms of electromagnetism and Maxwell's equations, it is revealed that the four equations contained in Maxwell's equations are all wrong.
Category: Classical Physics
[148] viXra:2406.0157 [pdf] submitted on 2024-06-26 19:20:20
Authors: Junho Eom
Comments: 16 Pages. 2 figures
At least one prime less than n (n >= 2) is known to be a factor of every composite between n and n^2, and this is explained by prime wave analysis. In this paper, the prime wave analysis is modified with a modular operator and applied to finding new primes within a limited boundary. As a result, using the known primes less than 3, the composites were eliminated and the remaining prime candidates within the limited boundary between 3 and 3^2 were collected. The boundary was sequentially extended from 3^2 to 9^2, 81^2, and 6561^2, yielding 2, 18, 825, and 2606318 prime candidates; these candidates were verified as new primes using online databases. In addition, the boundary was extended from 6561^2 to 43046721^2, and serial new primes were also found within a randomly selected boundary between 6561^2 and 43046721^2. In general, it was concluded that prime wave analysis modified with a modular operator could be a practical technique for finding new primes within a limited boundary.
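The elimination step described above rests on the fact that any composite below n^2 has a prime factor below n, so striking out multiples of the known small primes leaves only primes in the interval. A minimal sketch of such a modular elimination (the function name and interval conventions are illustrative, not the paper's exact "prime wave" formulation):

```python
def primes_in_interval(known_primes, n):
    """Return the primes strictly between n and n**2.

    Any composite m < n**2 has a prime factor <= sqrt(m) < n, so removing
    the multiples of the known primes below n (a modular elimination)
    leaves exactly the primes in the open interval (n, n**2).
    """
    lo, hi = n + 1, n * n - 1
    candidates = set(range(lo, hi + 1))
    for p in known_primes:
        first = ((lo + p - 1) // p) * p        # smallest multiple of p >= lo
        candidates -= set(range(first, hi + 1, p))
    return sorted(candidates)
```

With the primes below 3 (just 2), the interval (3, 9) leaves 5 and 7, matching the two candidates quoted in the abstract.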
Category: Data Structures and Algorithms
[147] viXra:2406.0156 [pdf] submitted on 2024-06-26 19:18:42
Authors: Junhao Yu, Fuyuan Xiao
Comments: 2 Pages. (Note by viXra Admin: Please cite and list scientific references)
In this paper, a novel complex dual Gaussian fuzzy number (CDGFN) is proposed to more accurately model two-dimensional uncertainty, which serves as the medium to represent generalized quantum basic belief assignment (GQBBA).
Category: Artificial Intelligence
[146] viXra:2406.0155 [pdf] submitted on 2024-06-26 15:22:15
Authors: Huanyin Chen
Comments: 24 Pages.
In this paper, we introduce the weighted weak group inverse in a ring with proper involution. This is a natural generalization of the weak group inverse for a complex matrix and the weighted weak group inverse for a Hilbert space operator. We characterize this weighted generalized inverse by using a kind of decomposition involving weighted group inverses and nilpotents. The relations among the weighted weak group inverse, the weighted Drazin inverse, and the weighted core-EP inverse are thereby presented.
Category: Algebra
[145] viXra:2406.0154 [pdf] submitted on 2024-06-25 01:09:29
Authors: Seiji Tomita
Comments: 2 Pages.
In this paper, we prove that the positive integer solutions of the equation x^2 +7 = 2^n are x = 1, 3, 5, 11, 181, corresponding to n = 3, 4, 5, 7, 15.
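The claimed solution set (this is the classical Ramanujan-Nagell equation) can be checked by brute force: for each exponent n, test whether 2^n - 7 is a perfect square. The search bound of 60 is an arbitrary choice for illustration:

```python
import math

def solve_x2_plus_7_eq_2n(max_n=60):
    """Find all (x, n) with x^2 + 7 = 2^n, x > 0, n <= max_n, by testing
    whether 2**n - 7 is a perfect square for each exponent n."""
    hits = []
    for n in range(3, max_n + 1):          # need 2**n > 7, so n >= 3
        m = 2**n - 7
        x = math.isqrt(m)
        if x * x == m:
            hits.append((x, n))
    return hits
```

The search returns exactly the five pairs listed in the abstract: (1, 3), (3, 4), (5, 5), (11, 7), (181, 15).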
Category: Number Theory
[144] viXra:2406.0153 [pdf] submitted on 2024-06-25 21:01:15
Authors: Rim Ung Jang, Yong Chon Jang, Se Yong Chon, Hak Mun Kim, Song Hak Hong
Comments: 12 Pages.
In this article we assume that during the particle swarm optimization (PSO) process, the inertia weight value in the velocity-update equation changes in a non-linear way, and that this reflects PSO's real nature very well. The proposed non-linear equation for the inertia weight factor is the following: []. This equation is an exponential function.
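The paper's own equation is elided in the abstract. As a hypothetical stand-in only, a commonly used exponential inertia-weight schedule of the general kind described can be sketched as follows; the functional form, parameter names, and constants are all illustrative assumptions:

```python
import math

def inertia_weight(t, t_max, w_start=0.9, w_end=0.4, k=4.0):
    """Hypothetical exponential (non-linear) inertia-weight schedule for PSO.

    Decays from w_start toward w_end as iteration t runs from 0 to t_max.
    This form and its constants are illustrative, not the paper's equation.
    """
    return w_end + (w_start - w_end) * math.exp(-k * t / t_max)
```

Early iterations keep a large weight (favoring exploration) and the weight decays non-linearly toward w_end (favoring exploitation), which is the usual motivation for such schedules.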
Category: Functions and Analysis
[143] viXra:2406.0152 [pdf] submitted on 2024-06-25 21:06:12
Authors: Xiaochun Mei
Comments: 31 Pages. In Chinese (Converted to pdf and abstract shortened by viXra admin - Please only submit articles in pdf format)
The stability of electron motion in the electric fields of an antenna and of a nucleus is analyzed in this paper by taking into account the mass-velocity formula. It is pointed out that the essence of antenna radiation is bremsstrahlung. If the nucleus is stationary or its speed is not high, the electron motion around the nucleus is stable and no radiation is caused. If the nucleus is in thermal motion and the electrons are not moving around the nucleus, bremsstrahlung can occur. On this basis, this paper gives a strict proof of Ohm's law in classical electromagnetic theory and reveals the dynamic nature of resistance. A new mechanism of superconductivity is proposed. It is considered that one of the fundamental reasons for superconductivity is that the nucleus stops vibrating at the critical temperature and the electrons move in the conductor without radiation. It is proved that the superconducting energy gap is caused by the decrease of the ground-state energy of electrons moving around the nucleus after the nucleus stops vibrating. The conditions under which nuclei stop vibrating on a one-dimensional infinitely long lattice are discussed, and a formula for calculating the critical temperature of elemental superconductors is derived, which is more consistent with the experimental data than the formula of BCS theory. This method can also clarify the superconductivity of conductors, the Meissner effect and other thermodynamic properties, and provide microscopic explanations for the multi-gap, pseudogap, type-II superconductor, and d-wave phenomena of high-temperature superconductors, providing new ideas for theoretical and experimental research on high-temperature superconductors and exerting an important influence on solid-state physics.
Category: Condensed Matter
[142] viXra:2406.0151 [pdf] submitted on 2024-06-25 05:42:41
Authors: En Okada
Comments: 17 Pages.
We propose a novel theoretical paradigm in which all physical realities can be concretely defined by the degree of symmetry breaking in a binary field, providing an alternative interpretation of the Higgs mechanism with vivid physical images. Together with a newly proposed hypothesis that the Planck constant evolves with the cosmic scale factor, which drives an evolution of the mass and electric charge of elementary particles, our model could solve a number of hierarchy problems in theoretical physics at one stroke, demystifying all four fundamental interactions as different aspects of a single consistent story.
Category: Relativity and Cosmology
[141] viXra:2406.0150 [pdf] submitted on 2024-06-25 13:30:58
Authors: Dmitri Martila
Comments: 2 Pages.
Suppose the Riemann Zeta function is multiplied by two arbitrary functions, and the resulting functions' values are equated at points symmetric about the critical line Re s = 1/2. In that case, the resulting system of four equations has to give the positions of the Zeta function's zeros. However, since the functions are arbitrary, the positions of the zeros would be arbitrary, making a zero coincide with a non-zero. Hence, the Riemann Hypothesis, that the only zeros are those on the critical line, is true. This simple text is a proof of the Riemann hypothesis, the Generalized Riemann hypothesis, and the Extended Riemann hypothesis with the corresponding functions.
Category: Number Theory
[140] viXra:2406.0148 [pdf] submitted on 2024-06-24 20:01:34
Authors: Victor Christianto, Florentin Smarandache
Comments: 7 Pages.
There is a large number of text collections in Indonesia related to the wayang theme, cf. P. J. Zoetmulder (1971). The text collections consist not only of wayang purwa, but also of variations on the theme such as wayang wahyu etc. In the meantime, a Japanese scholar, Shoshichi Nagatomo, proposed the concept of "Asian Logic," which differs from Western logic. In Western thought, things are often seen as black or white, good or bad. However, Asian logic embraces a certain degree of ambiguity. Here, "good" characters can exhibit flaws, and "bad" characters can possess redeeming qualities. This concept resonates with the Javanese philosophy of "ngono yo ngono neng ojo ngono" (note: the phrase can be translated as "it is what it is, but it's also not"). This philosophy embodies the logic of "not," acknowledging the multifaceted nature of reality. (submitted to a journal for review)
Category: Religion and Spiritualism
[139] viXra:2406.0147 [pdf] submitted on 2024-06-24 10:54:36
Authors: Pavlo Danylchenko
Comments: 5 Pages.
The anisotropy of the luminous intensity of distant astronomical objects of the expanding Universe in the intrinsic space of the observer is shown. A relativistic distance-luminosity relation is derived, by which the radial coordinate of an astronomical object is determined taking into account the Hubble anisotropy of its luminous intensity. As follows from this relation, the values of the radial coordinates of distant astronomical objects in the intrinsic space of the observer are much smaller than the values of their coordinates calculated by the classical distance-luminosity relation. This makes the presence of such hypothetical components of the Universe as dark matter and dark energy unnecessary in principle.
Category: Relativity and Cosmology
[138] viXra:2406.0146 [pdf] submitted on 2024-06-24 11:01:45
Authors: Pavlo Danylchenko
Comments: 5 Pages.
A gravitational-optical gradient lens, comoving with radiation, is formed in the observer's frame of reference of time and spatial coordinates (FR) due to the evolutionary decrease of the average density of matter in the Universe, as well as the evolutionary decrease of the refractive index of the interstellar medium. This diverging lens and the Hubble gravitational lens together form a virtual image of all infinitely far points of the Euclidean background space of the FR comoving with the expanding Universe on its focal surface, which is the imaginary observer horizon. Events that take place at different points but are simultaneous in the observer's FR are non-simultaneous in the cosmological time of the FR comoving with the Universe, due to the Universe's expansion. Therefore the world point of the imaginary Big Bang is present in the observer's intrinsic space at every moment of his proper time. This point and the observer's location point are the opposite poles of the four-dimensional hypersurface of the observer's space. When the gradient lens is not taken into account, one may conclude that the Hubble lens forms the horizon of the cosmological past (the imaginary observer horizon) in vacuum external solutions of the equations of the gravitational field when the cosmological constant is nonzero. This also leads to spatial homogeneity of the negative power of the global gravitational lens and, consequently, to a linear dependence of the redshift of the radiation spectrum of astronomical objects on the distance to those objects. However, when the gradient lens is taken into account, this dependence becomes nonlinear and corresponds to accelerated expansion of the Universe, while the imaginary observer horizon of the cosmological past degenerates into the point of the imaginary Big Bang of the Universe. This is similar to the degeneration of the imaginary horizon of the cosmological future (the Schwarzschild sphere) in the internal solution of the equations of the gravitational field.
Category: Relativity and Cosmology
[137] viXra:2406.0145 [pdf] submitted on 2024-06-24 11:05:04
Authors: Pavlo Danylchenko
Comments: 24 Pages.
The possibility of avoiding the physical realizability of the cosmological singularity (the Big Bang singularity of the Universe) directly in the orthodox general theory of relativity (GR) is substantiated. This can take place if cosmological time is counted in a frame of reference of coordinates and time (FR) not co-moving with matter, in which, by the Weyl hypothesis, the galaxies of the expanding Universe are motionless. The absence of any limitation on the mass of an astronomical body that self-contracts in the Weyl FR is shown, when the body has a hollow topological form in the space of the Weyl FR and mirror symmetry of its intrinsic space. Because of this symmetry, both the external and internal boundary surfaces of the body are observed as convex. At that, in the "turned inside out" internal part of the intrinsic space (in the Fuller-Wheeler lost antiworld), unlike the external part, instead of the phenomenon of expansion, the phenomenon of contraction of an "internal universe" is observed. And there is antimatter instead of matter in this internal part of the space. The inevitability of self-organization in the physical vacuum of spiral-wave structural elements, which correspond to elementary particles, and the universal electromagnetic nature of all non-fictive particles are substantiated. The ultrahigh luminosity of quasars and certain types of supernovas is caused by the annihilation of matter and antimatter.
Category: Relativity and Cosmology
[136] viXra:2406.0144 [pdf] submitted on 2024-06-24 11:09:00
Authors: Pavlo Danylchenko
Comments: 80 Pages. Collection of articles
It is shown that special and general relativity reflect the gauging of the effects of, correspondingly, motion and gravity on matter. This does not allow us to observe, in the intrinsic space and time of the matter, any changes that appear because of this effect. A solution of the gravitational field equations that corresponds to astronomical objects alternative to black holes is found. The eternity of the Universe's existence, both in the future and in the past, is shown.
Category: Relativity and Cosmology
[135] viXra:2406.0143 [pdf] submitted on 2024-06-24 19:58:30
Authors: P. G. Vejde
Comments: 5 Pages.
Sunspot patterns and motions observed in the photosphere are modelled here by assuming that the more solid inner core of the Sun rotates at a variable speed: over a solar cycle, the inner core alternates between rotating faster than the outer convection zone for 11 years and then rotating slower than the convection zone for the next 11 years, for a total 22-year solar cycle. This physical mechanism creates a 22-year cycle in the rotational velocity gradient of the plasma across the radius of the convection zone, which drives the N-S solar dynamo, creates the motion of sunspots, induces the observed variations and reversals in the polarity of sunspots, and in turn induces the overall polarity of the dipole solar magnetic field.
Category: Astrophysics
[134] viXra:2406.0141 [pdf] replaced on 2025-05-29 20:48:15
Authors: Koji Nagata, Do Ngoc Diep, Tadao Nakamura
Comments: 7 Pages.
We propose necessary and sufficient conditions for the root-finding problem. A quantum algorithm for finding the roots of a polynomial function $f(x)=x^m+a_{m-1}x^{m-1}+...+a_1x+a_0$ is studied in terms of phase kickback as an application of the necessary and sufficient condition. As a result, we find a simple formula for the root-finding problem. Here all the roots are in the real numbers $\mathbf{R}$, all the roots are distinct, and the number of roots is $m$. We expect our discussion to give some insight for future studies of the root-finding problem.
Category: Quantum Physics
[133] viXra:2406.0140 [pdf] submitted on 2024-06-24 18:53:01
Authors: Yi Cao
Comments: 26 Pages.
In the articles SunQM-6, -6s1, -6s2, -6s3, and -6s4, I established the framework of a brand-new {N,n} QM field theory. In the remaining SunQM-6 series articles, I added more detailed developments of the {N,n} QM field theory. In the current article, I add some new developments on the S/RFs-force. 1) A 4He nucleus is constituted of two identical neutron-proton binaries that are doing the "face-to-face plus face-opposite-face two-level orbital motion". Within each binary, the neutron and proton are doing the "face-to-face tidal-locked orbital binary motion" with the parallel nuclear spin ⇑⇑↑. Between the two binaries, they are doing the "face-opposite-face locked binary orbital motion" in the φ-1D bi-direction with the anti-parallel nuclear spin ⇑⇑↑⇓⇓↓, which is eventually transformed into a θ-1D orbital uni-directional motion. Meanwhile, the nuclear proton-1 and atomic electron-1 (in a 4He atom) are paired to do the "face-to-face tidal-locked orbital binary motion", and so does the proton-2 and electron-2 pair. The same model can be used to explain the dynamic structure of the multi-nucleons inside the nuclides of 1H, 2H, 3H, 3He, and the α particle. 2) A neutron is formed of two sub-structures, one "u-d" binary and one "d" singlet, which are also doing the "face-to-face plus face-opposite-face two-level orbital motion". A proton is also formed of two sub-structures, one "u-d" binary and one "u" singlet, again doing the "face-to-face plus face-opposite-face two-level orbital motion". The Weak Interaction may be the spin-spin interaction (↑↑ vs. ↓) between the two sub-structures (made of the three quarks inside a nucleon) with a "face-to-face plus face-opposite-face two-level orbital motion" in the θ-1D uni-direction; the β decay (in a neutron) may be caused by the crash of the two sub-structures after the disruption of this θ-1D uni-directional motion and the return to the φ-1D bi-directional motion.
3) The "face-to-face plus face-opposite-face two-level orbital motion" may be one of the common dynamic structures in N-body motion under the E/RFe-force, S/RFs-force, and even G/RFg-force fields. 4) The "face-to-face tidal-locked (spin ↑↑) binary orbital motion" is the root of the "face-opposite-face locked (spin ↑↓) binary orbital motion", of the "single-face tidal-locked binary orbital motion", of the "proton-electron mirror-coupled orbit" model, of the parallel spin of the "mother", the "daughter" and the "newborn" in the "|nL0> Elliptical/Parabolic/Hyperbolic Orbital Transition Model", and of the "π-bond" spin-spin interaction model in the arm of a galaxy. Therefore, it is also one of the many natural attributes of QM. 5) A "Fourier transformation" kind of analysis revealed that the "quasi 4He nucleus" is the building block of high-Z# nuclei. A similar analysis revealed that the {N,n//6} QM (in our universe) naturally includes the {N,n//2}, {N,n//3}, {N,n//4} and {N,n//6} modes, so it covers the maximum number of modes (for superposition), while q=6 is still a small integer that does not damage the quantum character of the {N,n//q} QM. Finally, because of its completeness and self-consistency, I believe that the {N,n} QM is qualified to be put into the "Feynman Pool" as one of the many co-existing QM theories.
Category: Quantum Physics
[132] viXra:2406.0138 [pdf] submitted on 2024-06-23 12:12:16
Authors: Carlos Castro
Comments: 21 Pages.
It is shown how a noncommutative spacetime leads to an area, mass, and entropy quantization condition which allows one to derive the Schwarzschild black hole entropy A/4G, the logarithmic corrections, and further corrections, from the discrete mass transitions taking place among different mass states in D = 4. The higher-dimensional generalization of the results in D = 4 follows. The discretization of the entropy-mass relation S = S(M) leads to an entropy quantization of the form S = S(M_n) = n, such that one may always assign n "bits" to the discrete entropy and, in doing so, make contact with quantum information. The physical applications of mass quantization, like the counting of states contributing to the black hole entropy, black hole evaporation, and the direct connection to the black-hole-string correspondence via the asymptotic behavior of the number of partitions of integers, follow. To conclude, it is shown how the recent large-N matrix model (fuzzy sphere) leads to very similar results for the black hole entropy as the physical model described in this work, which is based on the discrete mass transitions originating from the noncommutativity of the spacetime coordinates.
Category: Quantum Gravity and String Theory
[131] viXra:2406.0137 [pdf] submitted on 2024-06-23 13:41:03
Authors: Huanyin Chen, Yueming Xiang
Comments: 22 Pages.
Recently, Gao, Zuo and Wang introduced the $W$-weighted $m$-weak group inverse for complex matrices, which generalizes the (weighted) core-EP inverse and the WC inverse. The main purpose of this paper is to extend the concept of the $W$-weighted $m$-weak group inverse from complex matrices to elements of a Banach *-algebra. This extension is called the $w$-weighted $m$-generalized group inverse. We present various properties and representations of this new weighted generalized inverse. Related (weighted) $m$-generalized core inverses are investigated as well. Many properties of the $W$-weighted $m$-weak group inverse are thereby extended to wider settings.
Category: Algebra
[130] viXra:2406.0136 [pdf] replaced on 2024-08-31 21:53:15
Authors: Eric Edward Albers
Comments: 103 Pages. Updated/fixed mathematics
The Spacetime Superfluid Hypothesis (SSH) is a novel approach to unifying the fundamental forces of nature by proposing that spacetime is a superfluid medium. This paper presents a comprehensive overview of the SSH, its mathematical formulation, and its potential implications for our understanding of gravity, electromagnetism, and quantum mechanics. The SSH describes spacetime as a superfluid governed by a modified non-linear Schrödinger equation (NLSE), which includes interactions between the superfluid and the electromagnetic field. In this framework, particles and fields emerge as excitations or topological defects within the superfluid, with their properties determined by the dynamics and geometry of the superfluid. The paper explores the key aspects of the SSH, including the interpretation of matter-antimatter pair creation as the formation of solitons with opposite topological charges, the role of the potential term in the NLSE, and the description of magnetic fields as a manifestation of the superfluid's topological properties. The SSH's implications for light deflection and its relationship to Snell's law are also discussed. A significant focus of the paper is the coupling between gravity and electromagnetism within the SSH. By introducing a density field and a gravitational field defined as its gradient, the SSH provides a unified description of these fundamental forces. The modified Maxwell's equations and the equations for the coupling between gravity and electromagnetism are derived and analyzed. Furthermore, the paper demonstrates that the SSH can be aligned with general relativity by carefully choosing the values of its parameters, such as the mass of the superfluid particles and the coupling constants. This alignment highlights the SSH's potential as a generalization of general relativity, capable of describing both classical and quantum phenomena.
The SSH offers a fresh perspective on the nature of spacetime and the unification of the fundamental forces. While still a speculative theory, its mathematical elegance and potential for explaining a wide range of physical phenomena make it a promising avenue for further research. This paper provides a solid foundation for future investigations into the SSH and its implications for our understanding of the universe.
Category: Quantum Physics
[129] viXra:2406.0135 [pdf] replaced on 2024-10-14 14:58:02
Authors: Dennis Braun
Comments: 28 Pages.
In this paper, we show how the phenomenon of inertia can be explained in non-relativistic classical mechanics using a unified theory of gravity and inertia. As a basis, we use the inertia-free mechanics of H.J. Treder, which can implement both Mach's principle and the idea that inertia has a gravitational origin, without the shortcomings of an anisotropic inertial mass. Inertia arises from a velocity-dependent part of the gravitational potential. Thus, it is possible to formulate classical mechanics without postulating the weak equivalence principle, a gravitational constant, or any concept of inertial mass or inertial forces a priori; we show that all four can be derived from the theory. The theory is valid in arbitrarily accelerated frames of reference, and the inertial frames are determined by all other particles in the universe, as demanded by Mach's principle. The exact Newtonian inertial forces appear in any non-inertial frame, for translational and rotational acceleration, showing that they are not fictitious but real parts of the gravitational force. At the lowest order in v/c, Newtonian mechanics is obtained; the corrections that appear are shown to be just the terms present in gravitoelectromagnetism. Ultimately, explaining inertia as a gravitational effect allows us to derive an expression for the gravitational constant, enabling us to explain the apparent weakness of gravity. Such a unified theory of gravity and inertia has profound implications for the nature of mass and the structure of elementary particles, as well as the origin of relativistic and quantum effects. This suggests a very different path towards a combined theory of relativity, gravity, and quantum mechanics, as well as elementary particles, which will be discussed in a subsequent paper.
Category: Classical Physics
[128] viXra:2406.0134 [pdf] replaced on 2025-07-26 07:34:33
Authors: Marcin Barylski
Comments: 4 Pages. Updating references, fixing two typos in the main text.
One of the most famous unsolved problems in mathematics is the Collatz conjecture, which claims that every positive integer subjected to the simple rules 3x + 1 (for odd x) and x/2 (for even x) will eventually reach 1, with only one known cycle (1, 4, 2, 1) present in the calculations. This work is devoted to finding cycles in other interesting sequences of integer numbers, constructed with the use of some aspect of a primality test.
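The cycle-finding idea behind such experiments can be sketched generically: iterate the map while recording visited values, and report the cycle once a value repeats. The function name and iteration limit are illustrative assumptions, not the paper's code:

```python
def find_cycle(start, limit=100000):
    """Iterate the Collatz map (3x+1 for odd x, x/2 for even x) from
    `start`, recording visited values; return the cycle entered, listed
    from the first repeated value, or None if no repeat occurs in time."""
    seen = {}
    x, seq = start, []
    while x not in seen:
        if len(seq) >= limit:
            return None                      # gave up before a value repeated
        seen[x] = len(seq)
        seq.append(x)
        x = 3 * x + 1 if x % 2 else x // 2
    return seq[seen[x]:]
```

Every starting value tested to date lands in the single known cycle {1, 4, 2}; the same detector can be reused for the primality-based sequences the paper studies by swapping the update rule.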
Category: Number Theory
[127] viXra:2406.0133 [pdf] submitted on 2024-06-22 09:02:33
Authors: Daniel Thomas Hayes
Comments: 4 Pages.
A new method is developed and applied to a problem involving a 1D wave equation in disguise.
Category: Functions and Analysis
[126] viXra:2406.0132 [pdf] submitted on 2024-06-22 09:20:33
Authors: Pavlo Danylchenko
Comments: 5 Pages.
It is shown that Etherington's identity is a paralogism. Etherington's identity is based on the imaginary relativistic dilation of the intrinsic time of the galaxy by (1+z) times, while the presence of a relativistic anisotropy of the luminosity of stars quickly moving away is ignored in the frame of reference of spatial coordinates and time (FR) of the observer. Etherington did not take into account the fact that the Universe is homogeneous only in the comoving FR of the expanding Universe, and recklessly made a "mix" of the phenomena and features inherent in two different FRs. It is shown that, according to General Relativity (GR), only the transverse metric distances (the transverse comoving distance and the angular diameter distance similar to it) can obey the Hubble linear dependence. The transverse comoving distance belongs to the comoving FR of the expanding Universe and is determined by the redshift z of the emission wavelength. The angular diameter distance belongs to the FR of the observer of an expanding Universe and is determined by the redshift of the frequency of the emission wave. The luminosity distance is not a transverse metric distance and therefore its dependence on redshift is nonlinear. It is taken into account that the Hubble constant, like the length standards and the constant of the velocity of light, is a fundamentally unchangeable quantity in rigid FRs. Its exact value is found empirically.
Category: Relativity and Cosmology
[125] viXra:2406.0131 [pdf] replaced on 2024-09-08 08:37:32
Authors: Pavlo Danylchenko
Comments: 7 Pages.
The general solution of the equations of the gravitational field of a galaxy with an additional variable parameter n is found. The additional parameter n determines, in GR, the distribution of the average mass density, mainly in the friable galactic nucleus. The velocity of the orbital motion of stars is close to Keplerian only for n>2^25. At n<2^15, it is slightly less than the highest possible velocity even at the edge of the galaxy. The maximum allowable value of the average mass density of matter outside the friable galactic nucleus depends negligibly weakly on the parameter n in GR. If the energy-momentum tensor is formed not on the basis of external thermodynamic parameters, but on the basis of intranuclear gravithermodynamic parameters of the matter, then the dependence of the average mass density on the value of the parameter n becomes very significant. The permissible value of the average mass density of matter outside the friable galactic nucleus is determined by the value of the parameter that is responsible for the curvature of space, and it can be arbitrarily small. Therefore, in relativistic gravithermodynamics, in contrast to GR, there can be no shortage of baryonic mass.
Category: Quantum Physics
[124] viXra:2406.0130 [pdf] replaced on 2024-09-08 08:34:46
Authors: Pavlo Danylchenko
Comments: 676 Pages.
This work collects ancient, antique, medieval, and modern evidence of the dark-red-skinnedness of the medieval Slavs, their Sarmatian ancestors, and the ancient Germanic peoples, as well as of the Slavic speech of the ancestors of Ukrainians who lived across the strait together with the ancestors of the Chinese and Tungusic peoples (which is confirmed by the fact that hereditary Slavs preferred to use mainly the hard vowel "e" rather than the soft Turanian "je", as well as by the numerous Slavic-Chinese, Slavic-Japanese, Slavic-Evenki, and Slavic-Manchu isoglosses). This work allows us to look in a new way at the ethnogenesis of Ukrainians and finally give up the naive search for the ancestral homeland of the medieval dark-red-skinned Slavs in Europe. The work, formed on the basis of the materials of the previously published article "History of the tribes and peoples that formed the Ukrainian ethnos and the state of Ukraine", can be useful both for scientists and students of history and other faculties of higher education institutions, and for ordinary citizens of Ukraine who are interested in the history of their ancestors. Refusal of the historical myths imposed by the Turanians of Muscovy will significantly contribute to the formation of civil society in the country. Another thing that can also significantly contribute to this is the recognition of the state of our ancestors, the Slavic-speaking Goths Greutungi > Hrosi > Rus, as the first state of Ukrainians.
Category: Quantum Physics
[123] viXra:2406.0129 [pdf] submitted on 2024-06-22 13:15:07
Authors: Marcello Colozzo
Comments: 6 Pages.
The values assumed by the Riemann Zeta function on even natural integers contribute to the calculation of the total energy of an ideal Fermi gas in a non-relativistic and strongly degenerate regime.
Category: Quantum Physics
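[Editorial note: the entry above invokes the Riemann zeta function at even integers; these are the values ζ(2) = π²/6, ζ(4) = π⁴/90, etc., that enter the Sommerfeld expansion for a strongly degenerate Fermi gas. A minimal numerical check of the closed forms (pure Python, partial sums):]

```python
import math

def zeta(s, terms=200_000):
    """Partial sum of the Riemann zeta series, valid for Re(s) > 1."""
    return sum(n ** -s for n in range(1, terms + 1))

# Closed forms for the even-integer values used in degenerate Fermi gas energy
assert abs(zeta(2) - math.pi**2 / 6) < 1e-5    # tail of the series ~ 1/N
assert abs(zeta(4) - math.pi**4 / 90) < 1e-10
print(math.pi**2 / 6, math.pi**4 / 90)
```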
[122] viXra:2406.0128 [pdf] replaced on 2024-06-27 20:41:03
Authors: Tomasz Kobierzycki
Comments: 8 Pages.
I present here a possible way of creating spacetime from just the information contained in the wave function.
Category: Quantum Gravity and String Theory
[121] viXra:2406.0127 [pdf] submitted on 2024-06-23 02:54:53
Authors: Thierry L. A. Periat
Comments: 12 Pages.
The theory of the (E) question is concerned with the decomposition (synonym: division) of deformed tensor (resp. Lie) products. A first mathematical method (the intrinsic one) has been developed for the decomposition of deformed cross products. It only works in three-dimensional spaces and brings incomplete results. This document proposes a second approach bringing complete results, i.e.: the main and the residual parts of each decomposition, whatever the dimension D (D in N - {0, 1}) of the mathematical space is. But the method is plagued with a logical uncertainty. Fortunately, in any three-dimensional space, both methods can be calibrated through diverse scenarios. One of them may catch the attention of physicists since it re-introduces E. Cartan’s metrics induced by the evolution of surfaces.
Category: General Mathematics
[120] viXra:2406.0126 [pdf] submitted on 2024-06-22 01:48:31
Authors: Mykola Kosinov
Comments: 12 Pages.
A mathematical method for obtaining the value of the cosmological constant Λ from the cosmological equations of the Universe has been found. The method is based on the revealed connection of the cosmological constant Λ with fundamental physical constants. The new large scale numbers 10^140, 10^160 and 10^180 obtained from the scaling law allowed us to obtain cosmological equations linking the cosmological constant Λ with the fine structure constant "alpha", Planck's constant, the speed of light and the electron constants. The approximate Eddington equation Λ ≈ (m_e/(αħ))^4 · (2Gm_p/π)^2 is refined to an exact equation. A large number of new cosmological equations are derived, which include the cosmological constant Λ. The value of the constant Λ is obtained by different methods: from the finalized Eddington equations; from the coincidence of large numbers; from the cosmological equations of the universe and the speed of light; from the cosmological equations of the universe and Planck's constant; from the experimental value of the Pioneer anomaly; from the Kepler relation for the universe. All methods give the same value of the cosmological constant Λ (Λ = 1.36285... × 10^-52 m^-2). The theory based on the law of scaling of large numbers predicts a value of the constant Λ close to the experimental one. The accuracy of the calculated value of Λ is close to the accuracy of the Newtonian constant of gravitation G. The reason for the large number of equivalent equations that include the cosmological constant Λ remains a mystery.
Category: Relativity and Cosmology
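[Editorial note: the approximate Eddington relation quoted in the entry above can be evaluated literally with CODATA constants. This is a sketch, not the author's refined "exact" equation (which the abstract does not reproduce); taken literally, the approximate form lands about two orders of magnitude above the quoted Λ, consistent with its description elsewhere in this listing as an order-of-magnitude coincidence:]

```python
# Order-of-magnitude evaluation of Λ ≈ (m_e/(α ħ))^4 · (2 G m_p / π)^2,
# using CODATA 2018 values in SI units.
import math

m_e = 9.1093837015e-31      # electron mass, kg
m_p = 1.67262192369e-27     # proton mass, kg
hbar = 1.054571817e-34      # reduced Planck constant, J s
alpha = 7.2973525693e-3     # fine structure constant
G = 6.67430e-11             # Newtonian constant of gravitation

Lambda_approx = (m_e / (alpha * hbar))**4 * (2 * G * m_p / math.pi)**2
print(f"literal Eddington estimate: {Lambda_approx:.3e} m^-2")
# The entry quotes Λ = 1.36285e-52 m^-2; the literal approximate form
# agrees with it only to within a couple of orders of magnitude.
assert 1e-52 < Lambda_approx < 1e-49
```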
[119] viXra:2406.0125 [pdf] submitted on 2024-06-22 02:10:05
Authors: Mykola Kosinov
Comments: 10 Pages.
A mathematical method for obtaining the parameters of the Universe is found. New cosmological equations linking the parameters of the Universe with the fine structure constant "alpha" are derived. The appearance of the constant "alpha" in cosmological equations opens new possibilities in cosmology. In this paper, we investigate the phenomenon of the appearance of the microcosm constant "alpha" in cosmological equations. Cosmological equations are combined into systems of cosmological equations. This makes it possible to obtain the parameters of the universe as the solution of the system of algebraic equations of the universe. The theory based on the law of scaling of large numbers allows us to obtain the parameters of the observed Universe with an accuracy close to the accuracy of the Newtonian constant of gravitation G. It is shown that all the main parameters of the Universe and large numbers of scales 10^20 - 10^180 are composite quantities and include the fine structure constant "alpha". The fine structure constant "alpha" shows itself not only as a fundamental constant of the microworld, but also as the main constant of cosmology. The "alpha" constant makes it possible to obtain the values of the parameters of the Universe by a mathematical method from the electron constants. The fundamental connection between the parameters of the Universe and electron constants is revealed.
Category: Relativity and Cosmology
[118] viXra:2406.0124 [pdf] submitted on 2024-06-22 02:11:20
Authors: Mykola Kosinov
Comments: 9 Pages.
Measuring the parameters of the observed Universe is a very difficult task and does not give the necessary accuracy. A mathematical method for obtaining the parameters of the Universe has been found. The method is based on the revealed relationship between the parameters of the Universe and the dependence of their values on the fundamental physical constants. New large numbers on the previously unknown scales 10^140, 10^160 and 10^180 were derived. The new large numbers allowed us to obtain new cosmological equations linking the parameters of the Universe with fundamental physical constants. The number of new cosmological equations and their constituent parameters was sufficient to unite the equations into a system of cosmological equations. This made it possible to form a system of algebraic equations containing all parameters of the Universe. As a result, it became possible to obtain the parameters of the Universe by a mathematical method. The parameters of the Universe are the roots of the system of algebraic equations of the Universe. The theory based on the law of scaling of large numbers allows us to obtain the parameters of the observed Universe with an accuracy close to the accuracy of the Newtonian constant of gravitation G. The results obtained show that the Universe is tuned with high mathematical accuracy.
Category: Relativity and Cosmology
[117] viXra:2406.0123 [pdf] submitted on 2024-06-22 02:12:03
Authors: Mykola Kosinov
Comments: 8 Pages.
From the coincidence of large numbers on a scale of 10^180, an unusual equation is obtained that combines the parameters of the Universe in the form of Kepler's Third Law. The equation combines 4 parameters of the universe: mass, radius, time and Newtonian constant of gravitation G. Instead of the parameters of the planet orbit, the equation includes the parameters of the universe in the form of Kepler ratio R^3/T^2. From the coincidence of large numbers on scales of 10^160, 10^120, 10^40, an equation is obtained that combines the parameters of the electron in the form of Kepler's Third Law. The equation unifies the 4 parameters of the electron: mass, classical radius, time, and electric charge. These equations show that the limits of applicability of Kepler's Third Law extend far beyond the mechanics of planets. The description of the mechanism of planetary motion is only a special case of the application of Kepler's law. Kepler's Third Law in the cosmological equation and Kepler's Third Law in the equation of electromagnetism reveal the universal character of this law. Kepler's Law applies not only to the planets, but also to the universe and even to the electron. Kepler's Third Law acquires the status of the most important law of physics and cosmology. Full disclosure of its role and place in electromagnetism and cosmology will provide answers to many unsolved problems of physics and cosmology. Kepler's Third Law is a major contender for a basic law for the new physics.
Category: Relativity and Cosmology
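[Editorial note: the classical content being generalized in the entry above is Kepler's third law, R³/T² = GM/(4π²). A quick numerical check for Earth's orbit around the Sun:]

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # solar mass, kg
R = 1.496e11           # semi-major axis of Earth's orbit, m
T = 3.156e7            # orbital period, s (one year)

kepler_ratio = R**3 / T**2
gm_over_4pi2 = G * M_sun / (4 * math.pi**2)
print(kepler_ratio, gm_over_4pi2)   # both ~3.36e18 m^3/s^2
assert abs(kepler_ratio - gm_over_4pi2) / gm_over_4pi2 < 0.01
```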
[116] viXra:2406.0122 [pdf] submitted on 2024-06-21 07:41:48
Authors: Mykola Kosinov
Comments: 6 Pages.
At different times, famous scientists have proposed equations that demonstrate the relationship between cosmological parameters and fundamental physical constants. Some equations are approximate and the coincidences in them are estimated only by order of magnitude. The new large numbers on scales 10^140, 10^160, and 10^180 derived from the scaling law allow us to bring the approximate cosmological equations to exact equations. The approximate Dirac, Teller, Eddington-Weinberg, and Rice equations are reduced to exact equations. The exact equations are obtained from the coincidence of large numbers on the scale 10^60 and on the previously unknown scales 10^140, 10^160 and 10^180.
Category: Relativity and Cosmology
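[Editorial note: the archetypal "large number" behind the Dirac and Eddington relations mentioned above is the ratio of the electric to the gravitational force between an electron and a proton, of order 10^39. A sketch of that classic coincidence using CODATA values:]

```python
# The classic Dirac large number: electric-to-gravitational force ratio
# for an electron-proton pair (the separation distance cancels out).
k = 8.9875517874e9        # Coulomb constant, N m^2 C^-2
e = 1.602176634e-19       # elementary charge, C
G = 6.67430e-11           # gravitational constant
m_e = 9.1093837015e-31    # electron mass, kg
m_p = 1.67262192369e-27   # proton mass, kg

ratio = k * e**2 / (G * m_e * m_p)
print(f"{ratio:.3e}")     # ~2.27e39, on the 10^39 scale
assert 1e39 < ratio < 1e40
```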
[115] viXra:2406.0121 [pdf] submitted on 2024-06-22 02:12:55
Authors: Mykola Kosinov
Comments: 10 Pages.
The paper demonstrates a new method of obtaining values of the Universe parameters. The method is based on the revealed relationship between the parameters of the Universe and fundamental physical constants. New ratios of the dimensional parameters of the observable Universe are derived, which give the fine structure constant alpha. This is an unexpected result, since the fine structure constant refers to the microcosm, but not to the Universe. There are many of these equations. They have no explanation. There is no answer as to why, on such enormous scales, the ratios of the dimensional parameters of the universe give the alpha constant. Despite the lack of explanation, the new equations open up new possibilities in cosmology. The constant "alpha" and the parameters of the Universe are present together in one equation. This makes it possible to use the high precision of the alpha constant to calculate the values of the parameters of the observable Universe. This provides a high accuracy of the parameters of the observable Universe close to the accuracy of the Newtonian constant of gravitation G. New cosmological equations are derived, from which the value of the cosmological acceleration is obtained. This result allows us to solve the long-standing Pioneer-anomaly problem.
Category: Relativity and Cosmology
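[Editorial note: the numerical coincidence behind the Pioneer-anomaly claim in the entry above is well documented: the product cH₀ is of the same order as the measured anomalous acceleration a_P ≈ 8.74 × 10^-10 m/s². A sketch, assuming H₀ ≈ 70 km/s/Mpc:]

```python
c = 2.99792458e8          # speed of light, m/s
H0 = 70e3 / 3.0857e22     # Hubble constant ~70 km/s/Mpc, converted to s^-1
a_pioneer = 8.74e-10      # measured anomalous acceleration, m/s^2

cH0 = c * H0
print(f"c*H0 = {cH0:.2e} m/s^2")   # ~6.8e-10, same order as a_pioneer
assert 0.5 < a_pioneer / cH0 < 2.0
```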
[114] viXra:2406.0120 [pdf] submitted on 2024-06-22 02:13:55
Authors: Mykola Kosinov
Comments: 4 Pages.
Many relations of the parameters of the Universe equal to Planck's constant are revealed. The equations show that Planck's constant and the parameters of the Universe are related. The results obtained have no explanation. There is no answer why the equations, along with the parameters of the observable Universe, include the constants of the microcosm. A large number of cosmological equations have been revealed, in which constants very distant in physical meaning are combined. Despite the lack of explanation, such equations open new possibilities in cosmology. It is possible to use the high precision of Planck's constant to calculate the values of the parameters of the observable Universe with an accuracy close to that of the Newtonian constant of gravitation G. This is an important result for practice, since experimental methods for determining the parameters of the observable Universe are very complicated and do not give sufficient accuracy.
Category: Relativity and Cosmology
[113] viXra:2406.0119 [pdf] submitted on 2024-06-22 02:14:50
Authors: Mykola Kosinov
Comments: 24 Pages.
The paper solves the problem of mathematical inference of large numbers, which was formulated in 1985 by P. C. W. Davies [1]. The law of scaling of large numbers is derived. The law of scaling gives a new method of obtaining large numbers from dimensionless constants. It complements the known method based on relations of dimensional physical quantities. The law of scaling of large numbers shows that large numbers of scale 10^39, 10^40, 10^61, 10^122 are only part of the complete family of large numbers. The large numbers are supplemented by new large numbers of scales 10^140, 10^160, 10^180, which are naturally derived from the fundamental parameters of the observable Universe. New coincidences of relations of dimensional quantities on scales 10^140, 10^160, 10^180 are found. It is shown that large numbers of different scales are functionally related to each other. The primary large number D20 = (αDo)^(1/2) = 1.74349... × 10^20, from which large numbers of other scales are formed according to a uniform law, is chosen on the scale of 10^20. The primary large number D20 = 1.74349... × 10^20 consists of two dimensionless constants: the fine structure constant alpha and the Weyl number Do = 4.16561... × 10^42. The coincidences of the relations of the dimensional quantities with large numbers on scales 10^160 and 10^180 allowed us to derive simple and beautiful formulas for calculating the Hubble constant H and the cosmological constant Λ. An equation is derived which shows that the constants H and Λ are related. The origin of H and Λ from the fundamental physical constants of the electron is proved. The law of scaling of large numbers makes it possible to calculate analytically the parameters of the observable Universe with high accuracy. A new equation is derived, which unites the 5 most important parameters of the observable Universe: M_u R_u G Λ^2 = H^2.
Category: Relativity and Cosmology
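[Editorial note: the primary large number quoted in the entry above can be checked directly from its two stated dimensionless constituents; with α from CODATA and the quoted Weyl number, the square root reproduces the quoted digits:]

```python
alpha = 7.2973525693e-3   # fine structure constant (CODATA 2018)
Do = 4.16561e42           # Weyl number, as quoted in the entry

D20 = (alpha * Do) ** 0.5
print(f"{D20:.5e}")       # ~1.74350e20, matching the quoted 1.74349...e20
assert abs(D20 - 1.74349e20) / 1.74349e20 < 1e-4
```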
[112] viXra:2406.0118 [pdf] submitted on 2024-06-22 02:15:40
Authors: Mykola Kosinov
Comments: 13 Pages.
The mass spectrum of elementary particles, in the form of systematically increasing mass values, is obtained from the fractal mechanism of leptosynthesis and baryosynthesis. A theoretical justification for the mass spectrum of elementary particles is provided. The law of baryogenesis serves as the generator of the mass spectrum of elementary particles. The law of baryogenesis implies mass values for both known and yet undiscovered elementary particles. The generated mass spectrum is represented by multiplets of three mass values each. The mass difference within triplets is very small and less than the mass of an electron. The mass values of elementary particles in the mass spectrum adhere to a strict law, forming a systematically increasing sequence. The growth dynamics of the mass values of elementary particles is close to the law of increasing numbers in the Mersenne sequence. From the mass spectrum of elementary particles, it follows that the predicted number of undiscovered elementary particles far exceeds the number of known particles. In the mass range from the electron to the deuteron, 56 elementary particles remain undiscovered. Expected mass values are provided for new elementary particles that are yet to be discovered in experiments.
Category: High Energy Particle Physics
[111] viXra:2406.0117 [pdf] submitted on 2024-06-22 02:16:56
Authors: Mykola Kosinov
Comments: 10 Pages.
The value of the strong interaction coupling constant, αs, is not predicted by the Standard Model theory and is known from experiments. This article proposes a method for obtaining the constant from the Baryogenesis Law. The constant is directly calculated from the mass defect of elementary particles, presenting a novel approach to investigating the strong interaction coupling constant. This method unveils the mechanism of the constant's origin from the mass defect of elementary particles, providing new insights into the precision of αs. The calculated value from the Baryogenesis Law, αs(mZ0) = 0.1172(18), aligns well with the experimental value. A range of values for the constant is determined, ensuring its physical significance. The Baryogenesis Law reveals that the strong interaction coupling constant, αs, is not an independent constant, establishing its connection with the fine structure constant α. The dependent status of αs and its link with the fine structure constant indicates a profound connection between the two fundamental interactions — electromagnetic and strong.
Category: High Energy Particle Physics
[110] viXra:2406.0116 [pdf] submitted on 2024-06-22 02:17:56
Authors: Mykola Kosinov
Comments: 11 Pages.
The law of baryogenesis possesses predictive power and yields a series of new results. Some of these results have proven unexpected and require further in-depth study. The law of baryogenesis introduces two new constants of elementary particles - the magic number and mass defect. It is demonstrated that Mersenne numbers are the magic numbers for electrically charged elementary particles, while doubled Mersenne numbers serve as the magic numbers for neutral elementary particles. The mass defect of elementary particles is a novel concept and constant in the realm of elementary particles. Equations for calculating the magic numbers and mass defect of elementary particles are derived from the fractal mechanism of baryogenesis. The law of baryogenesis unifies three dimensionless constants of elementary particles: the ratio of particle mass to electron mass, the ratio of mass defect to electron mass, and the magic number.
Category: High Energy Particle Physics
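[Editorial note: the "magic numbers" claimed in the entry above are the Mersenne numbers M_n = 2^n − 1 (doubled for neutral particles). A minimal generator, to make the sequences concrete:]

```python
def mersenne(n):
    """n-th Mersenne number, 2**n - 1."""
    return 2**n - 1

charged_magic = [mersenne(n) for n in range(1, 8)]
neutral_magic = [2 * m for m in charged_magic]   # doubled Mersenne numbers
print(charged_magic)   # [1, 3, 7, 15, 31, 63, 127]
print(neutral_magic)   # [2, 6, 14, 30, 62, 126, 254]
assert charged_magic == [1, 3, 7, 15, 31, 63, 127]
```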
[109] viXra:2406.0115 [pdf] submitted on 2024-06-22 02:18:58
Authors: Mykola Kosinov
Comments: 21 Pages.
This paper explores the fractal mechanism of baryosynthesis involving antiparticles. The remarkably perfected fractal mechanism of baryosynthesis demonstrates that only two types of particles (electrons and positrons) are sufficient for the formation of protons, neutrons, and all the visible matter in the Universe. The baryosynthesis mechanism reveals that matter and antimatter can not only annihilate but also coexist and interact, creating elementary particles. Matter and antimatter together create leptons, protons, neutrons, and the whole variety of substances. The fractal mechanism of baryogenesis involving antimatter is a universal mechanism, realized in the stages of leptosynthesis, baryosynthesis, and nucleosynthesis. The interaction and coexistence of matter and antimatter without annihilation are the primary conditions for baryosynthesis. It is shown that without antimatter, the formation and existence of matter in the Universe are impossible. The law of baryogenesis directly follows from the fractal mechanism of baryosynthesis. The law of baryogenesis unveils the mystery of the mass spectrum of elementary particles. The law of baryogenesis has enabled the derivation of essential dimensionless constants of elementary particles, such as 1836.15... (for the proton), 1838.68... (for the neutron), 206.76... (for the muon), 3670.48... (for the deuteron), 3477.2 (for the tau-meson), 5496.92... (for the triton), 5495.88... (for the helium nucleus). These fundamental constants have not been obtained within the framework of the standard model. A new constant, the mass defect of elementary particles, has been introduced for elementary particles. This new constant is a key constant in unraveling the mechanism of strong interaction.
Category: High Energy Particle Physics
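[Editorial note: the dimensionless constants listed in the entry above are, to the quoted precision, the familiar particle-to-electron mass ratios; the claim is that the baryogenesis law reproduces them. A cross-check against CODATA 2018 values:]

```python
# CODATA 2018 particle-to-electron mass ratios vs the truncated
# values quoted in the entry above.
codata = {"proton": 1836.15267343, "neutron": 1838.68366173,
          "muon": 206.7682830, "deuteron": 3670.48296788}
quoted = {"proton": 1836.15, "neutron": 1838.68,
          "muon": 206.76, "deuteron": 3670.48}

for name, q in quoted.items():
    rel = abs(codata[name] - q) / codata[name]
    print(name, rel)
    assert rel < 1e-4   # quoted values truncate the CODATA ratios
```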
[108] viXra:2406.0114 [pdf] submitted on 2024-06-22 02:20:52
Authors: Mykola Kosinov
Comments: 15 Pages.
Using the proton fractal, the mechanism of baryogenesis has been revealed. From the mechanism of baryogenesis the law of baryogenesis is deduced. From the fractal mechanism, beautiful and simple mathematical equations which display the mechanism of proton formation are obtained. The equations allow us to obtain the fundamental constants of the proton. The proton fractal shows that there are many yet undiscovered elementary particles with masses in the range from the mass of the electron to the mass of the proton. A prediction of the mass spectrum of new elementary particles for their detection in experiments is given. The fractal theory of proton mass makes it possible to obtain the most important dimensional and dimensionless fundamental constants of elementary particles by calculation. These constants could not be obtained within the standard model. The law of baryogenesis was obtained as a generalization of the proton's structural genesis law. The proton fractal leads to the solution of the antimatter problem and reveals the mechanism of baryonic asymmetry. The proton fractal and the mechanism of baryogenesis reveal the fallacy of the conclusion about the predominance of matter over antimatter in the modern Universe.
Category: High Energy Particle Physics
[107] viXra:2406.0113 [pdf] submitted on 2024-06-21 09:13:34
Authors: Edward C. Larson
Comments: 18 Pages.
This paper elaborates upon and further develops https://vixra.org/abs/2402.0149, which proposed a novel realist framework for making sense of standard quantum theory. The framework is said to be "realist" in that it provides a complete observerless picture of quantum state ontology and dynamics, in conjunction with a mechanistic account of measurement processes, that answers basic questions of what, where, when, and how. The framework embodies a general quantum ontology consisting of two entities, called W-state and P-state, that respectively account for the wave- and particle-like aspects of quantum systems. W-state is a generalization of the wavefunction, but has ontic stature and is defined on the joint time-frequency domain. It constitutes a non-classical local reality, consisting of superpositions of quantum waves writ small. P-state is a non-local hidden variable that constrains the probability distributions governing deferred measurement outcomes, such as in the Einstein-Podolsky-Rosen (EPR) thought experiment. This paper presents a full solution of the core measurement problem, which pertains to the global coordination within quantum systems required to bring about wavefunction collapse in causal fashion consistent with special relativity. The framework has a tri-partite structure, consisting of Q-1 (unitary evolution of W-state), Q-2 (measurement-like events that continually occur in the absence of observer intervention), and Q-3 (measurement events in experimental settings). Traditional quantum theory draws a sharp dichotomy between Q-1 and Q-3. The new framework incorporates physical wavefunction collapse, which is held to be a real physical process and ubiquitous feature of nature in the quantum realm, as Q-2, which fills the gap between Q-1 and Q-3. Quantum systems have a built-in dynamic proper time, which is relativistically invariant and plays a central role in the measurement problem solution.
The framework is thus background-independent, a key requirement for making quantum theory compatible with general relativity. Quantum gravity is introduced as Q-4 atop the tri-partite foundation.
Category: Quantum Gravity and String Theory
[106] viXra:2406.0112 [pdf] submitted on 2024-06-22 02:21:39
Authors: Mykola Kosinov
Comments: 15 Pages.
A universal energy law is proposed in the form of a formula that includes the energy constant and dimensionless parameters. This way of representing the energy formula is a generalized equation for mechanical, electric, magnetic, gravitational and thermal energy. From one generalized energy equation directly follow: the kinetic energy formula E=mV^2/2, the quantum energy formula E=hν, the Einstein formula E=mc^2, the thermal energy formula E=3kBT/2, the Joule-Lenz law, and the formulas for gravitational energy, electrical energy, magnetic energy, charged capacitor energy, inductance coil energy, and rotational kinetic energy. The universal energy formula includes a single energy constant (Eo = 8.18710577... x 10^-14 J). The energy constant is numerically equal to the rest energy of the electron. Despite the electromagnetic status of this constant, it is a constant not only in the laws of electromagnetic energy, but also in the laws of mechanical energy, gravitational energy, and thermal energy. The dimensionless quantities are represented by the ratio of the used characteristics to the constants of these characteristics. The universal formula of energy will facilitate the study and understanding of the laws of mechanics, gravitation and electromagnetism in the educational process.
Category: Classical Physics
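[Editorial note: the energy constant quoted in the entry above can be verified independently, since it is stated to equal the electron rest energy m_e c². A check with CODATA values:]

```python
m_e = 9.1093837015e-31    # electron mass, kg (CODATA 2018)
c = 2.99792458e8          # speed of light, m/s (exact)

E0 = m_e * c**2           # electron rest energy
print(f"{E0:.8e} J")      # ~8.1871058e-14 J
assert abs(E0 - 8.18710577e-14) / 8.18710577e-14 < 1e-6
```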
[105] viXra:2406.0111 [pdf] submitted on 2024-06-22 02:08:36
Authors: Mykola Kosinov
Comments: 17 Pages.
It is shown that the similarity of the formulas of Newton's law of gravitation and Coulomb's law is not a coincidence. The reason for the similarity is that these laws are derived from a single law of force. The forces of inertia, gravitation, electric force, and magnetic force are represented by a single generalized law. A universal formula of force is derived for the generalized law of force interaction. Newton's law of gravity, Newton's second law, Coulomb's law of electrostatics, Ampere's law, Lorentz's law of force, and the centripetal force all follow from the universal formula of force as particular results. The interaction constant in the universal formula of force is the fundamental constant of force Fo = 29.0535101 N. This is the electromagnetic interaction force between two electrons. Despite the electromagnetic status of this constant, it enters both the laws of electromagnetism and the formulas of Newton's laws of mechanics. From the universal formula of force, the equation for calculating the Newtonian constant of gravitation G is derived. The formulas for calculating the Newtonian constant of gravitation G include Planck's constant, Sommerfeld's constant, and the fundamental constants of the electron. This is an unexpected result from the universal formula of force that affects the independent status of the constant G. The dependence of the Newtonian constant of gravitation G on the fundamental physical constants opens the way to obtain a more accurate value of the constant G by calculation. In solving the problem of increasing the accuracy of the Newtonian constant of gravitation G, an important role is assigned to large Dirac numbers. The universal formula of force allows one to elegantly and simply obtain the equation of any force interaction law in mechanics and in electromagnetism using the fundamental constant of force. 
The Universal formula of force will facilitate the study and understanding of the laws of mechanics and the laws of electromagnetism in the educational process.
Category: Classical Physics
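[Editorial note: the entry above does not state the separation at which its force constant is defined, but one natural reading (an assumption on my part) reproduces the quoted value exactly: the Coulomb force between two electrons separated by the classical electron radius, F0 = e²/(4πε₀ r_e²):]

```python
k = 8.9875517874e9          # Coulomb constant, N m^2 C^-2
e = 1.602176634e-19         # elementary charge, C (exact)
r_e = 2.8179403262e-15      # classical electron radius, m (CODATA 2018)

F0 = k * e**2 / r_e**2      # Coulomb force between two electrons at r_e
print(f"{F0:.6f} N")        # ~29.05351 N, matching the quoted 29.0535101 N
assert abs(F0 - 29.0535101) < 1e-3
```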
[104] viXra:2406.0110 [pdf] submitted on 2024-06-22 02:07:27
Authors: Volodymyr Kaplunenko, Mykola Kosinov
Comments: 16 Pages.
Throughout its history, electrolysis has not been studied in terms of the relationship between the nature of electrolysis and the nature of catalysis. The article generalizes catalysis and electrolysis and reveals common features of these two fundamental processes. The concept of "electron as a catalyst" substantiates that electrolysis is a type of catalysis. The catalysts in electrolysis are electrons. A comparison of the mechanisms of electrolysis and catalysis is made. The mechanisms of electrolysis and catalysis are the same type of mirror-symmetric donor-acceptor mechanisms. In these mechanisms, the transfer of electric charges is realized. Electrolysis, as a catalytic process, has characteristics that are inherent to catalysis. These characteristics are the law of the rate of electrolysis, the TOF of electrolysis, and the TON of electrolysis. Catalysis and electrolysis share common laws and a common genesis of laws. Faraday's law of electrolysis follows directly from the universal law of catalysis. Confirmation of the common nature of catalysis and electrolysis has been obtained. Electrolysis, as a type of catalysis, creates prerequisites for the creation of a general theory of catalysis and electrolysis.
Category: Chemistry
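[Editorial note: Faraday's law of electrolysis, which the entry above derives as a special case of its catalysis law, relates deposited mass to transferred charge, m = (Q/F)·(M/z). A worked sketch for copper deposition, with a hypothetical charge:]

```python
F = 96485.332          # Faraday constant, C/mol
Q = 9650.0             # charge passed, C (hypothetical run)
M = 63.546             # molar mass of copper, g/mol
z = 2                  # electrons transferred per Cu^2+ ion reduced

m = (Q / F) * (M / z)  # Faraday's first law of electrolysis
print(f"{m:.3f} g of Cu deposited")
assert abs(m - 3.178) < 0.005
```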
[103] viXra:2406.0109 [pdf] submitted on 2024-06-22 02:06:31
Authors: Volodymyr Kaplunenko, Mykola Kosinov
Comments: 18 Pages.
The general sign and general mechanism of catalytic reactions based on the transfer of electric charges to reagents has been revealed. This mechanism of catalysis is realized at the fundamental level - at the level of interaction of elementary particles (electrons and protons). The choice of this mechanism of catalysis made it possible to obtain the general law of catalysis and particular laws for other types of catalysis. The main parameter in the equation of the generalized law of catalysis is the total electric charge obtained by the reactants. In catalysis, the donor-acceptor mechanism is realized, which leads to a change in the oxidation state of the reactants and to a decrease in the activation energy of the chemical reaction. The main active factor in the donor-acceptor mechanism of catalysis is the electric charge that the catalyst transfers to the reactants. The electric charge is a quantitative characteristic in the formula for the universal law of catalysis. From the universal law of catalysis, the laws of heterogeneous, homogeneous, combined, field catalysis, and Faraday's law of electrolysis follow as particular results.
Category: Chemistry
[102] viXra:2406.0108 [pdf] submitted on 2024-06-22 02:05:52
Authors: Volodymyr Kaplunenko, Mykola Kosinov
Comments: 27 Pages.
The model of the unified mechanism of catalytic reactions within the framework of a new paradigm of catalysis was proposed as a development of the concept of the "electron as a catalyst". The model of the unified mechanism of catalysis made it possible to obtain the universal law of the catalytic rate. From the universal law of catalysis, the laws of heterogeneous, homogeneous, field catalysis, and Faraday's law of electrolysis follow as particular results. The laws and equations of catalysis are the mathematical equivalents of the mechanism of catalysis. The laws of catalysis are represented by equations in which the characteristics of the catalyst substance and the other reaction participants are parameters.
Category: Chemistry
[101] viXra:2406.0107 [pdf] submitted on 2024-06-22 02:05:09
Authors: Volodymyr Kaplunenko, Mykola Kosinov
Comments: 31 Pages.
The article proposes a new paradigm of catalysis. It is developed as a natural continuation of the advanced concepts in catalysis, such as "electron as a catalyst", "proton as a catalyst", and "oxidation state concept". The goal of the new catalysis paradigm is to reveal the general mechanism of catalytic reactions and to derive the laws of catalysis. The new paradigm of catalysis is based on the idea that two universal catalysts exist in nature that can increase the reactivity of chemical substances. The catalysts in all types of catalysis are fundamental objects of the microcosm - elementary particles: the electron and the proton. In the new paradigm, numerous substances that have traditionally been considered catalysts are assigned the role of precursors of catalysts. In the reaction, they mediate the transfer of electrons and protons. The common mechanism in various types of catalysis is a mechanism based on the transfer of electric charges by electrons and protons and on the change in the oxidation state of the reactants with their participation. Changing the state of oxidation of reactants, the formation of radicals leads to an increase in their reactivity. A model of the relay donor-acceptor mechanism as a universal mechanism of catalysis was proposed. The new paradigm of catalysis has made it possible to reveal the universal mechanism of catalytic reactions and to solve the main problem of catalysis - to obtain a single universal law of catalysis. From the universal law of catalysis, the laws of heterogeneous, homogeneous, field catalysis, and Faraday's law of electrolysis follow as particular results. The laws of catalysis are represented not by empirical equations, but by mathematical relations in which the parameters are chemical and physical characteristics of the catalyst, precursor, and reagents. The new paradigm shows that catalysis is a universal and fundamental natural phenomenon. 
The concept of two fundamental catalysts leads to the conclusion that all chemical reactions are catalytic. They are realized by a single universal mechanism of catalysis. In those reactions that are carried out without the presence of additional substances and are not traditionally considered catalytic, the catalysts are an electron or a proton. In these reactions, one of the reactants plays the role of a precursor and a donor of elementary particles.
Category: Chemistry
[100] viXra:2406.0106 [pdf] submitted on 2024-06-22 02:04:26
Authors: Volodymyr Kaplunenko, Mykola Kosinov
Comments: 42 Pages.
The article presents a study of previously obtained laws of catalysis with respect to their fundamental and interdisciplinary nature. It follows from the laws of catalysis that catalysis belongs to the class of chemical and physical phenomena. The laws of catalysis and the formulas for calculating TOF and TON are combinations of quantities of chemical and physical nature. The main mystery of catalysis for many years has been the undisclosed role of oxidation states in the mechanism of catalysis. The field of application of oxidation states in catalysis has expanded. The oxidation states have been used for a new purpose, as parameters in the laws and formulas of catalysis. It has been shown that in catalysis it is necessary to consider not only the oxidation states of the catalyst, but also the oxidation states of the reactants. The concept of oxidation states as quantitative values proved to be the main missing link that made it possible to obtain the laws of catalysis. The key role of the oxidation states of the catalyst and reagents in the donor-acceptor mechanism of catalysis has been demonstrated. The list of oxidation states of chemical elements known in chemistry can be applied as a tool for the selection of catalysts. The role and place of electric charges in the mechanism of catalysis and in the laws of catalysis have been shown. A new field of application of Faraday's constant in chemistry is outlined. In addition to its well-known use in the law of electrolysis and in the Nernst and Goldman equations, the Faraday constant now appears in catalysis and is included as an interaction constant in the laws of catalysis. The signs of the fundamentality of catalysis are given and the place of the laws of catalysis in the family of fundamental physical laws is shown. The laws of catalysis complement the family of fundamental laws of Nature.
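For context on the quantities discussed, the standard textbook definitions of TON and TOF can be sketched as below. This is a minimal illustration using hypothetical values; the paper's own law-based formulas are not reproduced here.

```python
# Standard turnover number (TON) and turnover frequency (TOF) definitions.
# Values below are hypothetical, for illustration only.

def turnover_number(moles_product: float, moles_active_sites: float) -> float:
    """TON: moles of product formed per mole of catalytic active sites."""
    return moles_product / moles_active_sites

def turnover_frequency(moles_product: float, moles_active_sites: float,
                       time_s: float) -> float:
    """TOF: turnovers per active site per unit time (here per second)."""
    return turnover_number(moles_product, moles_active_sites) / time_s

ton = turnover_number(0.50, 1.0e-4)             # 5000 turnovers
tof = turnover_frequency(0.50, 1.0e-4, 3600.0)  # ~1.39 per second
print(ton, tof)
```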
Category: Chemistry
[99] viXra:2406.0105 [pdf] replaced on 2024-07-05 16:23:37
Authors: Timothy Jones
Comments: 3 Pages. Got some suggestions for improving.
We clarify and strengthen Hardy's footnote proof of an essential step in his proof of the transcendence of pi. We show that ri is algebraic if and only if r is algebraic.
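The stated equivalence follows from the field structure of the algebraic numbers; a standard one-line argument (this sketch is not Hardy's footnote proof itself) is:

```latex
% i is algebraic (a root of x^2 + 1), and the algebraic numbers form a field:
r \text{ algebraic} \;\Longrightarrow\; ri \text{ algebraic (product of algebraic numbers)},
\qquad
ri \text{ algebraic} \;\Longrightarrow\; r = (ri)\cdot(-i) \text{ algebraic}.
```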
Category: Number Theory
[98] viXra:2406.0104 [pdf] replaced on 2024-07-09 10:34:33
Authors: Abdelkrim ben Mohamed
Comments: 7 Pages.
In this working paper we try to prove the Collatz conjecture, also known as the 3x+1 problem.
Category: General Mathematics
[97] viXra:2406.0103 [pdf] submitted on 2024-06-22 03:30:07
Authors: Max Myakishev-Rempel, Michael M. Rempel, Richard Alan Miller
Comments: 9 Pages. (Note by viXra Admin: An abstract and scientific references are required)
The Extraterrestrial Genetics Research Proposal (XG1) investigates the hypothesis that extraterrestrial beings have influenced human evolution through genetic engineering. By analyzing human and other species' genomes, the study seeks to uncover potential alien genetic contributions. Advanced bioinformatics and experimental sequencing techniques will be used to examine public genome data, DNA from self-identified abductees, starseeds, and hybrids, as well as ancient and modern skulls with unusual morphologies. The research will analyze genetic data for patterns indicative of alien manipulation, such as incomplete parental contributions and large genetic insertions, particularly in populations with alleged extraterrestrial connections. It will also investigate domesticated animals, plants, and cancer genomic data to identify artificial manipulation signatures. Additionally, the project will explore genetic markers in extraordinary individuals and starseeds using 23andMe data. Finally, stardust analysis aims to detect extraterrestrial spores in high orbit around Earth through collaboration with space industry leaders.
Category: Quantitative Biology
[96] viXra:2406.0101 [pdf] submitted on 2024-06-20 05:00:50
Authors: Taha Sochi
Comments: 15 Pages.
In this article we pay tribute to Herbert Dingle for his early call to re-assess special relativity from philosophical and logical perspectives. However, we disagree with Dingle about a number of issues, particularly his failure to distinguish between the scientific essence of special relativity (as represented by the experimentally-supported Lorentz transformations and their formal implications and consequences, which we call "the mechanics of Lorentz transformations") and the logically inconsistent interpretation of Einstein (which is largely based on the philosophical and epistemological views of Poincare). We also disagree with the manner and attitude he adopted in his campaign against special relativity, although we generally agree with him about the necessity of impartiality of the scientific community and the scientific press towards scientific theories and opinions, as well as the necessity of full respect for the ethics of science and the rules of moral conduct in general.
Category: Relativity and Cosmology
[95] viXra:2406.0100 [pdf] submitted on 2024-06-20 17:09:51
Authors: Mykola Kosinov
Comments: 6 Pages.
The law of cosmological gravitational force is proposed in addition to Newton's law of gravitation. The law operates beyond the limit of applicability of Newton's law of gravitation and is applicable to the gravitational interaction of the universe. The new law of gravitational force shows that any body of mass m is subject to a cosmological force proportional to the mass of the body and the cosmological constant Λ. The formula for the law of cosmological force is F = (mc^2)·√Λ. Instead of the gravitational constant G, the cosmological constant Λ is included in the cosmological force law. The new law gives a value of the force very close to the value of the Pioneer anomaly.
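The comparison to the Pioneer anomaly can be checked numerically: the formula above implies a mass-independent acceleration a = c²√Λ. A minimal sketch, using the commonly quoted observational values for Λ and the anomalous Pioneer acceleration:

```python
import math

C = 2.998e8          # speed of light, m/s
LAMBDA = 1.1e-52     # cosmological constant, m^-2 (approximate observed value)

# Acceleration implied by F = (mc^2)*sqrt(Lambda): a = c^2 * sqrt(Lambda)
a_cos = C**2 * math.sqrt(LAMBDA)
a_pioneer = 8.74e-10  # reported anomalous Pioneer acceleration, m/s^2

print(f"a_cos     = {a_cos:.2e} m/s^2")   # ~9.4e-10
print(f"a_pioneer = {a_pioneer:.2e} m/s^2")
```

The two values indeed agree to within roughly ten percent, which is the closeness the abstract claims.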
Category: Classical Physics
[94] viXra:2406.0099 [pdf] submitted on 2024-06-20 17:09:14
Authors: Mykola Kosinov
Comments: 14 Pages.
The connection between the parameters of the Universe and fundamental physical constants is disclosed. It is shown that three constants G, c, Λ are sufficient to obtain all the parameters of the Universe. The parameters of the Universe and the parameters of the electron are mathematically precisely related to each other by scale transformations. The scaling factors are formed by the large Weyl number and the fine structure constant "alpha". The scaling factors are derived from the law of scaling of large numbers. The appearance of the fine structure constant "alpha" and electron constants in the cosmological equations is evidence of the fundamental connection between microphysics and cosmology. The disclosure of the origin of the parameters of the Universe from the fundamental physical constants of the electron provides new possibilities. By studying the electron, one can unravel the mysteries of the Universe.
Category: Relativity and Cosmology
[93] viXra:2406.0098 [pdf] submitted on 2024-06-20 17:55:57
Authors: Lelong Marius
Comments: 13 Pages.
Since 1905 physics has lost its soul, its intimate contact with reality and rationality. The core pillars of modern physics, general relativity, special relativity and quantum mechanics, contain some basic but fundamental errors with profound consequences. Neoclassical physics is a completion of classical Newtonian physics with a new classification of matter according to the capacity to generate a gravitational interaction: structured matter and unstructured matter. Its formalism is based on the mathematical apparatus of classical physics with the integration of some elements of quantum mechanics and relativity. This article will show that, with a modification of Newton's law through a modification of the gravitational potential, we find a relation equivalent to the field equation of general relativity. Through this, and with a minimal modification of Maxwell's laws and the Lorentz transformations, neoclassical physics rediscovers all the theoretical and empirical results of modernity but with a completely different interpretation and use, all within the realistic, rational, local and deterministic framework of classicism. We demonstrate the Newtonian limit, the advance of Mercury's perihelion, the deflection of light rays in gravitational fields with the phenomenon of gravitational lensing, the slowing down of measured time, the redshift of light, the light ring for black holes, as well as the quantum nature of neoclassical gravitation with the expression of its quantum form and Einsteinian limit.
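The benchmark the abstract must match for Mercury's perihelion is the standard general-relativistic value. A minimal sketch of that reference computation (the paper's own neoclassical derivation is not reproduced here):

```python
import math

# Standard GR perihelion advance: delta_phi = 6*pi*G*M / (a*(1-e^2)*c^2)
# radians per orbit, evaluated with Mercury's orbital parameters.
GM_SUN = 1.327e20        # G*M_sun, m^3/s^2
C = 2.998e8              # speed of light, m/s
A = 5.791e10             # Mercury semi-major axis, m
E = 0.2056               # Mercury orbital eccentricity
PERIOD_DAYS = 87.969     # Mercury orbital period

dphi = 6 * math.pi * GM_SUN / (A * (1 - E**2) * C**2)   # rad per orbit
orbits_per_century = 100 * 365.25 / PERIOD_DAYS
arcsec = dphi * orbits_per_century * (180 / math.pi) * 3600
print(f"{arcsec:.1f} arcsec/century")   # ~43, the classic GR result
```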
Category: Quantum Gravity and String Theory
[92] viXra:2406.0096 [pdf] submitted on 2024-06-21 03:38:33
Authors: Tai Cho Lai
Comments: 69 Pages.
This paper explores the basic principles of the special theory of relativity, formulated and developed mainly by physicists including but not only Albert Einstein, Hendrik Lorentz, Hermann Minkowski, and Henri Poincaré. Concepts such as Galilean transformations, Lorentz transformations, time dilation, length contraction, and tensors will be explored. This paper also discusses Maxwell’s equations and their implications for special relativity.
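The Lorentz transformations and time dilation the paper surveys can be illustrated with a minimal numerical sketch (units with c = 1; a one-dimensional boost only):

```python
import math

# Lorentz boost along x with velocity v (in units where c = 1).
def lorentz_boost(t: float, x: float, v: float) -> tuple[float, float]:
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

# Time dilation: a clock at rest at x = 0 ticking t = 1 appears dilated
# by the factor gamma (= 1.25 for v = 0.6).
t_prime, x_prime = lorentz_boost(1.0, 0.0, 0.6)
print(t_prime)

# The spacetime interval t^2 - x^2 is invariant under the boost.
t2, x2 = lorentz_boost(3.0, 2.0, 0.6)
print(t2**2 - x2**2)   # invariant: 3^2 - 2^2 = 5
```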
Category: Relativity and Cosmology
[91] viXra:2406.0095 [pdf] submitted on 2024-06-19 20:40:50
Authors: Carlos Alejandro Chiappini
Comments: 5 Pages. In Spanish (email: carloschiappini@gmail.com)
This document describes two problems. One is low voice volume in transmission; the other is poor reception when there is electromagnetic noise in the operating band. The methods used in my equipment to alleviate these problems are presented.
Category: Classical Physics
[90] viXra:2406.0094 [pdf] submitted on 2024-06-19 20:36:52
Authors: Brian Fraser
Comments: 25 Pages.
Author documents his personal experiments in changing radioactive decay rates by simple electrolysis with "primitive" equipment. References are also given to institutional experiments. Decay rates can also be influenced by Earth-Sun distance. Characteristics of "inside-out stars" are noted. Heating may have unexpected effects on radioactivity. Radioactive half-lives can be extended or shortened. Author also inadvertently found a long forgotten useful method of suppressing dendrite formation during battery charging. This article may be freely distributed for non-commercial or educational use. Author’s right to own and maintain the document must be preserved.
Category: Nuclear and Atomic Physics
[89] viXra:2406.0093 [pdf] submitted on 2024-06-19 20:26:51
Authors: Mykola Kosinov
Comments: 14 Pages.
The reason for the limitations of Newton's classical theory of gravitation is that classical gravitation remained an unfinished theory. Newton's formula FN = GMm/r^2 gives the force of gravitational interaction between two bodies. Accordingly, Newton's law formula gives only part of the force of universal gravitation and does not apply to the universe. In classical gravitation the additional cosmological force of gravitational interaction of bodies with the mass of the Universe remained undiscovered. The additional cosmological force is represented by a new law of gravitation, different from Newton's law. The law of cosmological force is presented using the cosmological constant Λ: FCos = m·c^2·√Λ. The cosmological force has a linear dependence on the mass of the body and does not obey the law of inverse squares. On small scales, the additional cosmological force is much smaller than the Newtonian force. On the scale of the Universe, the cosmological force exceeds the Newtonian force and has a theoretical limit equal to the Planck force FP = c^4/G = 1.21027·10^44 N. This large force was not represented in the law of universal gravitation. A new mathematical formula for the law of universal gravitation is given. The law of universal gravitation is represented by two equations, Newton's law FN = GMm/r^2 and the law of cosmological force FCos = m·c^2·√Λ. The law of universal gravitation admits a quantum description of the gravitational interaction. It is shown that extended classical gravity has a high heuristic potential. The law of universal gravitation in extended form explains the mystery of galaxy rotation curves and the Pioneer Anomaly without involving the dark matter hypothesis.
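The claim that the cosmological term dominates only on large scales can be made concrete by computing the radius at which the two forces above are equal. A minimal sketch for a solar-mass central body, using the commonly quoted observed value of Λ:

```python
import math

# Crossover radius where G*M*m/r^2 equals m*c^2*sqrt(Lambda),
# i.e. r = sqrt(G*M / (c^2 * sqrt(Lambda))), for a solar-mass body.
C = 2.998e8            # m/s
LAMBDA = 1.1e-52       # m^-2, approximate observed cosmological constant
GM_SUN = 1.327e20      # G*M_sun, m^3/s^2

a_cos = C**2 * math.sqrt(LAMBDA)     # ~9.4e-10 m/s^2
r_cross = math.sqrt(GM_SUN / a_cos)  # metres
AU = 1.496e11
print(f"crossover at ~{r_cross / AU:.0f} AU")   # a few thousand AU
```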
Category: Classical Physics
[88] viXra:2406.0091 [pdf] submitted on 2024-06-19 20:23:00
Authors: V. G. Bondarev, L. V. Migal
Comments: 9 Pages.
This paper presents a new approach to the description of the nature and essence of electric charge, formulated within the framework of a unified concept of the formation of the structure of the cosmos. A computer model of the cosmos based on elements in the form of primary space quanta and energy quanta is proposed. We conducted a detailed study and visualization of the process of formation of the structure of the primary space. The essence of electric charge is defined as a transformation of smooth, continuous two-dimensional primary space by its deformation, leading to the formation of a set of spacestrons of different levels. The reason for the origin and equality of negative and positive charges is explained in this work. The process of spacestron formation is investigated in detail. It is shown that the electric field arises from a distortion of 4-dimensional space-time, similarly to the gravitational field, but under the influence of open quanta of space and antispace. A new interpretation of the possible appearance of the Universe from the point of view of a submicroscopic approach to nature is proposed. Hypotheses about the role of primary space, the origin and form of the Big Bang, the asymmetry of baryonic matter, as well as such concepts as dark energy, dark matter and their possible correlation in the cosmos are considered.
Category: High Energy Particle Physics
[87] viXra:2406.0090 [pdf] submitted on 2024-06-18 14:02:02
Authors: Mieczysław Szyszkowicz
Comments: 4 Pages.
Buffon's needle problem, posed by Georges-Louis Leclerc, Comte de Buffon in the 18th century, stands as a cornerstone in the realm of geometric probability. The problem encapsulates a scenario where a needle of a given length is dropped randomly onto a floor composed of parallel strips of equal width. The inquiry revolves around determining the likelihood that the needle will intersect a line between two strips. Here a new solution of this classical problem is proposed.
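The classical answer the paper revisits, a crossing probability of 2l/(πd) for needle length l ≤ strip width d, is easy to verify by Monte Carlo simulation. A minimal sketch (this checks the textbook result, not the paper's new solution):

```python
import math
import random

# Monte Carlo estimate of Buffon's crossing probability 2*l/(pi*d), l <= d.
def buffon(trials: int, l: float = 1.0, d: float = 2.0) -> float:
    hits = 0
    for _ in range(trials):
        y = random.uniform(0.0, d / 2.0)          # centre distance to nearest line
        theta = random.uniform(0.0, math.pi / 2)  # needle angle
        if y <= (l / 2.0) * math.sin(theta):      # needle crosses the line
            hits += 1
    return hits / trials

random.seed(0)
est = buffon(200_000)
print(est, 2 * 1.0 / (math.pi * 2.0))   # both close to 1/pi ~ 0.318
```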
Category: General Mathematics
[86] viXra:2406.0089 [pdf] replaced on 2024-06-25 12:21:20
Authors: Ervin Goldfain
Comments: 12 Pages.
The measurement problem of Quantum Mechanics reflects the tension between the deterministic evolution of wavefunctions and their random collapse caused by experimental observations. Here we argue that, in the Hamiltonian picture of quantum dynamics, wavefunction collapse follows from the destruction of adiabatic invariance on ultrashort time scales. Once adiabatic invariance is lost, Planck’s constant becomes meaningless, and Quantum Mechanics breaks down. We also suggest that, in the long-time limit, action quantization is a result of Arnold diffusion, a process describing the instability of nearly integrable Hamiltonian systems with more than two degrees of freedom.
Category: Quantum Physics
[85] viXra:2406.0088 [pdf] submitted on 2024-06-18 15:59:15
Authors: Bo Tian
Comments: 18 Pages.
In this paper, a new algorithm for solving the MEB problem is proposed, based on new understandings of the geometric properties of the minimal enclosing ball problem. A substitute for Ritter's algorithm is proposed to obtain approximate results with higher precision, and a (1+ϵ)-approximation algorithm is presented to reach a specified precision in much less time compared with present algorithms.
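For reference, Ritter's classical approximation, the baseline the paper proposes to replace, can be sketched in 2D as follows:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Ritter's bounding-ball approximation (2D version).
def ritter(points):
    # 1. Pick any point p, find the farthest point q from p,
    #    then the farthest point r from q.
    p = points[0]
    q = max(points, key=lambda s: dist(p, s))
    r = max(points, key=lambda s: dist(q, s))
    # 2. Initial ball: centred midway between q and r.
    cx, cy = (q[0] + r[0]) / 2, (q[1] + r[1]) / 2
    rad = dist(q, r) / 2
    # 3. Grow the ball to cover any point still outside it.
    for s in points:
        d = dist((cx, cy), s)
        if d > rad:
            rad = (rad + d) / 2
            k = (d - rad) / d          # fraction to shift centre toward s
            cx += (s[0] - cx) * k
            cy += (s[1] - cy) * k
    return (cx, cy), rad

centre, radius = ritter([(0, 0), (2, 0), (1, 1), (0.5, -0.3)])
print(centre, radius)
```

Ritter's ball can overshoot the optimum by up to a few percent, which is the precision gap the paper's substitute targets.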
Category: Data Structures and Algorithms
[84] viXra:2406.0087 [pdf] submitted on 2024-06-18 19:44:32
Authors: Irshad Ahmad, Muhammad Ali, Roshan Ali, Nighat Nawaz, Simon G. Patching
Comments: 32 Pages.
Multidrug efflux proteins, also known as efflux pumps, are one of the major mechanisms that bacteria have evolved for their resistance against antimicrobial agents. Gram-negative bacteria are intrinsically more resistant to many antibiotics and biocides due to their cell structure and the activity of multidrug efflux proteins. These transporters actively extrude antibiotics and other xenobiotics from the cytoplasm or surrounding membranes of cells to the external environment. Based on amino acid sequence similarity, substrate specificity and the energy source used to export their substrates, there are seven major families of distinct bacterial multidrug efflux proteins: ABC, RND, MFS, SMR, MATE, PACE, AbgT. Individual proteins may be highly specialized for one compound or highly promiscuous, transporting a broad range of structurally dissimilar substrates. Protein structural organization in a large majority of the families, including the number of transmembrane helices, has been confirmed by high-resolution structure determination for at least one member. In this book chapter, we provide an updated review on the families of bacterial multidrug efflux proteins, including basic properties, energization, structural organization and molecular mechanism. Using representative proteins from each family, we also performed analyses of transmembrane helices, amino acid composition and distribution of charged residues. Ongoing characterization of structure-function relationships and regulation of bacterial multidrug efflux proteins are necessary for contributing new knowledge to assist drug development and strategies that will overcome antimicrobial resistance.
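The amino acid composition and charged-residue analyses described can be sketched with standard sequence bookkeeping. A minimal illustration on a hypothetical segment (the sequence and charge convention are mine, not taken from the chapter):

```python
from collections import Counter

# Simple charge convention: Lys/Arg positive, Asp/Glu negative
# (His treated as neutral here).
POSITIVE = set("KR")
NEGATIVE = set("DE")

def composition(seq: str) -> dict:
    """Fractional amino acid composition of a sequence."""
    counts = Counter(seq)
    return {aa: n / len(seq) for aa, n in counts.items()}

def net_charge(seq: str) -> int:
    """Net charge: count of positive minus count of negative residues."""
    return sum(aa in POSITIVE for aa in seq) - sum(aa in NEGATIVE for aa in seq)

segment = "MKLLVLSLALLAFSSATEAD"   # hypothetical segment, for illustration
print(composition(segment))
print(net_charge(segment))          # one K versus one E and one D
```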
Category: Biochemistry
[83] viXra:2406.0086 [pdf] submitted on 2024-06-18 20:53:41
Authors: Theophilus Agama
Comments: 10 Pages.
Using ideas from the geometry of compression, we improve on the current upper bound of Heilbronn's triangle problem. In particular, letting $\Delta(s)$ denote the minimal area of the triangle induced by $s$ points in a unit disc, we have the upper bound $$\Delta(s) \ll \frac{1}{s^{\frac{3}{2}-\epsilon}}$$ for small $\epsilon := \epsilon(s) > 0$.
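The quantity in question, the smallest triangle area among s points in the unit disc, is easy to probe empirically. A minimal sketch using random placement (which only gives a crude configuration, far from the extremal arrangements behind the paper's bound):

```python
import random

# Area of the triangle with vertices a, b, c (shoelace formula).
def tri_area(a, b, c):
    return abs((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1])) / 2

# Rejection sampling of a uniform point in the unit disc.
def random_disc_point(rng):
    while True:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x*x + y*y <= 1:
            return (x, y)

# Smallest triangle area among s random points in the unit disc.
def min_triangle(s, rng):
    pts = [random_disc_point(rng) for _ in range(s)]
    return min(tri_area(pts[i], pts[j], pts[k])
               for i in range(s) for j in range(i+1, s) for k in range(j+1, s))

print(min_triangle(12, random.Random(1)))
```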
Category: Combinatorics and Graph Theory
[82] viXra:2406.0085 [pdf] replaced on 2024-06-24 20:48:45
Authors: Bryce Petofi Towne, Wang Zhang, Chunying Cui
Comments: 37 Pages.
The concept of Illogical Classification-Based Thinking (ICBT), the tendency to group traits based on impressions rather than logical connections, is related to consumer behavior. This study explores how ICBT and Positive Associations influence consumer impressions of products and companies in a Chinese context. We conducted two large-scale experiments involving 660 participants, divided into three groups: control (indefinite attributes), experimental (definite attributes), and experimental with AI-generated visuals. Results showed that ICBT significantly influences consumer impressions even with indefinite attributes, and definite attributes enhance positive impressions. AI-generated visuals generally reinforced positive impressions, though their impact varied. Notably, while cis-female participants exhibited stronger positive impressions with definite attributes and visuals, the gender differences were not as pronounced as hypothesized. These findings provide insights into the cognitive processes driving consumer behavior, emphasizing the role of ICBT in forming positive associations and offering practical recommendations for marketers. Future research should explore these phenomena across diverse cultural settings and examine the long-term effects of ICBT on consumer behavior.
Category: Social Science
[81] viXra:2406.0084 [pdf] replaced on 2026-01-30 17:36:02
Authors: Viktar Yatskevich
Comments: 18 Pages.
Time is a fundamental physical concept underlying classical mechanics, relativity theory, quantum field theory, and modern models of elementary particles and cosmology. Despite its central role, the physical nature of time and the mechanisms governing its dependence on gravitation remain subjects of ongoing discussion. In particular, numerous experiments have demonstrated gravitational time dilation, yet their physical interpretation is commonly restricted to geometric descriptions within the framework of General Relativity. In this work, experimental results on gravitational time dilation are briefly reviewed from the viewpoints of classical physics and General Relativity. On this basis, the main principles of a new "Physical theory of gravity" are proposed. The theory offers a physical interpretation of time as a quantity determined by internal processes in matter influenced by gravitational fields, rather than as a purely geometric coordinate. Within this approach, gravitational time dilation is interpreted as a consequence of the interaction between gravitation and the internal electromagnetic and structural properties of material systems. The proposed framework not only provides a physically motivated interpretation of the observed gravitational influence on time but also offers an alternative view on the nature of gravitation itself and its action on material bodies. These results may contribute to the development of a more comprehensive physical theory of gravitation in the future.
Category: Quantum Gravity and String Theory
[80] viXra:2406.0083 [pdf] replaced on 2025-01-14 23:12:22
Authors: Vincenzo Nardozza
Comments: 13 Pages.
Newtonian gravitational fields have no kinetic energy. If they did, a moving mass would make the fields vary around it and would give kinetic energy to space. In this paper we investigate the hypothesis that the energy of those varying fields could be responsible for the kinetic energy of a mass. To do that, we have slightly modified Newtonian gravity to give kinetic energy to the fields, and we have found that the integral of the kinetic energy of the varying fields around a moving mass is exactly (1/2)mv^2. This leads to the equivalence of gravitational and inertial mass, because the very same fields responsible for masses attracting each other are the fields that give inertia to a mass.
Category: Classical Physics
[79] viXra:2406.0082 [pdf] submitted on 2024-06-17 19:56:33
Authors: James R. Arnold
Comments: 10 Pages. (Note by viXra Admin: Please remove line numbers)
A model of the universe is offered that can derive the Hubble Constant independent of empirical measurement, using just a midline estimate of the age of the universe and simple arithmetic calculations. It can explain the JWST discoveries of apparent anomalous early galaxy formations without need of substantial revisions to established astrophysical theories, as the new findings have seemed to require. Concepts of "Dark energy", cosmic flatness, cosmic inflation, and an accelerating expansion of the universe are rendered unnecessary or at least partly misinterpreted.
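The "simple arithmetic" the abstract alludes to can be illustrated by the coasting-universe relation H0 ≈ 1/t: an age estimate near 13.8 Gyr corresponds to a Hubble constant of roughly 71 km/s/Mpc. A minimal sketch of that unit conversion (this is my illustration of the arithmetic, not the paper's specific derivation):

```python
# Convert an age-of-universe estimate into H0 ~ 1/t, in km/s/Mpc.
SECONDS_PER_YEAR = 3.156e7
KM_PER_MPC = 3.086e19

def hubble_from_age(age_gyr: float) -> float:
    t_seconds = age_gyr * 1e9 * SECONDS_PER_YEAR
    h0_per_second = 1.0 / t_seconds
    return h0_per_second * KM_PER_MPC   # km/s/Mpc

print(hubble_from_age(13.8))   # ~71 km/s/Mpc
```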
Category: Relativity and Cosmology
[78] viXra:2406.0080 [pdf] submitted on 2024-06-17 19:45:24
Authors: Athon Zeno
Comments: 5 Pages. (Note by viXra Admin: Please cite and list scientific references)
The Sphere-Base-One mathematical system is a novel framework that takes the sphere as the fundamental unit of volume and redefines the relationships between spheres, cubes, and other geometric objects. This innovative approach offers a fresh perspective on the nature of space and volume, challenging conventional notions of geometry and opening up new avenues for interdisciplinary research and discovery. By focusing on the sphere as the primary building block and exploring the negative space around it, the Sphere-Base-One system has the potential to unlock new insights and solutions in a wide range of scientific and engineering disciplines, including quantum physics, cosmology, surface chemistry, fluid dynamics, and electrical engineering. The simplification and reformulation of key equations in the Sphere-Base-One system may lead to easier calculations and, more importantly, to the identification of patterns and relationships that were previously obscured by the limitations of the traditional Cube-Base-One system. While not intended to replace the existing mathematical framework, the Sphere-Base-One system serves as a complementary tool that can be applied in parallel to drive progress and innovation across multiple fields. This article introduces the core concepts of the Sphere-Base-One system, explores its potential applications, and discusses the implications of this new mathematical paradigm for the future of scientific research and technological advancement.
Category: General Mathematics
[77] viXra:2406.0079 [pdf] submitted on 2024-06-16 00:08:26
Authors: Ralph B. Hill
Comments: 39 Pages.
I introduce the discovery of ultimate reality of an invisible fundamental realm I refer to as the Present Space Universe. The discovery of the Present Space Universe (PSU) has unprecedented transformational consequences for fundamental physical sciences and humanity. The PSU is the realm of a universal present. The mysterious nature of the present time is the phenomenon of its existence. The new understanding of Present Space Reality (PSR) provides unprecedented scientific insight into hidden structure, mechanisms, and the stunning nature of ultimate reality from one principle. The fundamental principle works as a logical lens through which answers for an abundance of our most fundamental questions in science suddenly emerge. It provides stunningly direct insights into who we are and what our existence in our apparent physical universe is about. The fundamental principle is shown as the direct logical consequence of the two fundamentally distinct ways in which our physical universe presents itself to us. They are propagation of physical effects under the cosmic speed limit and simultaneous effects in quantum phenomena. I demonstrate how PSR leads to solutions for an abundance of our most fundamental questions of quantum physics, cosmology, thermodynamics, biology, consciousness and beyond. As the PSU is ultimate reality, our apparent physical universe is not. It is an effective but ultimately virtual projection. PSR identifies the fundamental nature of consciousness in its specific physical context. Our fundamental conscious existence is part of the ultimate reality of the PSU. Continuation of consciousness beyond our physical lifetimes is a natural logical consequence. PSR identifies a mechanism in Present Space Causality (PSC) for the generation of laws of physics and the origin of our apparent physical universe. The presence of a higher order entity of consciousness is identified. 
PSR identifies an operational mechanism for select differentiation of undifferentiated states in the simultaneously evolving PSC. The quantum measurement problem is resolved. Characteristics of quantum behavior finally make and reveal sense. Their functional relationship with classical behavior is determined. Mechanisms of differentiation and undifferentiation project phenomena we associate with randomness and entropy in thermodynamics. PSR suggests a black hole shell model that removes paradoxes arising in central singularity models. It points to real-world relevance of AdS/CFT correspondence. The universal pathway for answers for seemingly unrelated ultimate questions is extraordinary evidence for a crucially missing keystone in prior scientific understanding. Profoundly meaningful insights for all of humanity extend to questions of purpose.
Category: Relativity and Cosmology
[76] viXra:2406.0078 [pdf] submitted on 2024-06-16 21:24:18
Authors: Sam Perry
Comments: 5 Pages. (Note by viXra Admin: Please cite and list scientific references)
Relative-motion-induced alterations in the observer-perceived gravity field of a mass result in a change in the location of the centre of gravity as perceived by the other mass, shifting it in the direction of the mass's relative motion and causing a subtle additional inward torque on orbiting bodies. The torque effect becomes more noticeable at greater distances under weaker gravity regimes.
Category: Astrophysics
[75] viXra:2406.0077 [pdf] replaced on 2025-03-16 04:08:29
Authors: Hajime Mashima
Comments: 29 Pages.
Modulo not divisible by xyz and possible expansions.
Category: Number Theory
[74] viXra:2406.0076 [pdf] submitted on 2024-06-16 21:20:50
Authors: V. Budarin
Comments: 31 Pages. (Correction made by viXra Admin to conform with the requirements of viXra.org)
Writing accurate equations requires accepting the point of view that the general equation of motion must describe the most general (turbulent) flow regime. The implementation of this point of view became possible by applying the operation of isolating the velocity rotor from the expressions for strain rates and from the Laplace operator of velocity. In this case, the second form of the equation was used for the total acceleration of a liquid particle in the Gromeka-Lamb form, which includes the angular velocity of rotation of the particles [4]. The equations are derived for continuous media in which shear stresses are described using strain rates in the corresponding plane - two models of a Newtonian fluid and one model of a non-Newtonian fluid with a power-law rheological law. Thus, the main task of the derivation was to find the term characterizing the influence of the viscous friction force on the turbulent flow regime. In any version of the derivation, the initial equation is the motion of a continuous medium in stresses.
Category: Mathematical Physics
[73] viXra:2406.0075 [pdf] submitted on 2024-06-15 17:56:44
Authors: Agnij Moitra
Comments: 16 Pages.
Gradient boosting is a widely used machine learning algorithm for tabular regression, classification and ranking. However, most open-source implementations of gradient boosting, such as XGBoost, LightGBM and others, have used decision trees as the sole base estimator. This paper, for the first time, takes an alternative path: rather than relying on a single static base estimator (usually a decision tree), it trains a list of models in parallel on the residual errors of the previous layer and then selects the model with the least validation error as the base estimator for that layer. This approach achieves state-of-the-art results when compared with other gradient boosting implementations on 50+ tabular regression and classification datasets. Furthermore, ablation studies show that MSBoost is particularly effective for small and noisy datasets. It thereby has a significant social impact, especially in tabular machine learning problems in domains where it is not feasible to obtain large high-quality datasets.
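The per-layer model selection idea can be sketched in a few dozen lines. This is a minimal toy version under my own assumptions (two trivial candidate models, a fixed learning rate), not the authors' MSBoost implementation:

```python
import random

# Toy boosting: each stage fits several candidate base models on the
# current residuals and keeps whichever has the lowest validation error.

def fit_mean(xs, ys):
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    denom = sum((x - mx) ** 2 for x in xs) or 1.0
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / denom
    a = my - b * mx
    return lambda x: a + b * x

def boost(xs, ys, xv, yv, stages=10, lr=0.5):
    models = []
    def predict(x):
        return sum(lr * m(x) for m in models)
    for _ in range(stages):
        residuals = [y - predict(x) for x, y in zip(xs, ys)]
        candidates = [fit(xs, residuals) for fit in (fit_mean, fit_linear)]
        # Keep the candidate with the lowest squared validation error.
        def val_err(m):
            return sum((yv_i - (predict(xv_i) + lr * m(xv_i))) ** 2
                       for xv_i, yv_i in zip(xv, yv))
        models.append(min(candidates, key=val_err))
    return predict

random.seed(0)
xs = [i / 10 for i in range(30)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.05) for x in xs]
model = boost(xs[::2], ys[::2], xs[1::2], ys[1::2])
print(model(1.5))   # close to the true 2*1.5 + 1 = 4
```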
Category: Artificial Intelligence
[72] viXra:2406.0074 [pdf] submitted on 2024-06-14 21:23:09
Authors: Bijon Kumar Sen
Comments: 10 Pages. 2 Figures; 1 Table
A closer analysis of the mathematical expressions describing linear and revolutionary motions reveals that the characteristics of these two motions are interconvertible under appropriate conditions. Here, it is proposed that revolutionary motion is the only type of motion that exists in the universe and that rectilinear motion is a special case of it. In the case of the propagation of light, this proposal fails in terrestrial experiments, but at astronomical distances the revolutionary motion of light has been reported in experimental observations. Einstein considered the propagation of light rays in straight lines, and in his general theory of relativity he proposed the bending of light rays as the effect of the gravitational field of the Sun. According to him, the force of gravity arises from the curvature of space-time. He tried to place the gravitational force in line with electrical and magnetic interactions obeying Newton's description of universal gravitation. This might be the leading cause of Einstein's lack of success in interpreting gravitation as well as the unified field theory.
Category: Classical Physics
[71] viXra:2406.0073 [pdf] submitted on 2024-06-14 04:36:05
Authors: Seiji Tomita
Comments: 5 Pages.
In this paper, we prove that there are infinitely many integer solutions of X^6 + Y^6 = W^n + Z^n for n = 2, 3, 4.
Category: Number Theory
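For the n = 2 case of the equation above, one infinite family is immediate, since X^6 + Y^6 = (X^3)^2 + (Y^3)^2; the paper's interest is presumably in nontrivial families and in n = 3, 4. A quick check of the trivial family (not the author's construction):

```python
# For n = 2, taking W = X^3 and Z = Y^3 solves X^6 + Y^6 = W^n + Z^n trivially.
def is_solution(X, Y, W, Z, n):
    return X**6 + Y**6 == W**n + Z**n

checks = all(is_solution(X, Y, X**3, Y**3, 2)
             for X in range(1, 50) for Y in range(1, 50))
```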
[70] viXra:2406.0072 [pdf] replaced on 2024-12-09 04:41:28
Authors: Junho Eom
Comments: 13 Pages. 1 figure
The core of this paper is to reveal the structural necessity that causes primes and new primes to form a symmetry, arising from the cause-and-effect relationship between primes and composites. Regarding the boundary, if an arbitrary integer n is chosen, the set of consecutive numbers from 0 to n is defined as the 1st boundary; it extends using an arithmetic sequence with n elements but is limited to n^2. After selecting n, therefore, n boundaries are generated from the 1st to the nth, and each boundary contains n elements. Each prime wave in the 1st boundary connects to the composites that use the prime as a factor, and the remaining numbers between the 2nd and nth boundaries on the x-axis are all new primes in Series I. Under this condition, the primes in the 1st boundary and the new primes in the 2nd boundary form a symmetry around the midpoint of the even number 2n, caused by the asymmetry between primes and composites, and Goldbach's conjecture is satisfied in Series II and III. Therefore, Series IV explains the necessity for the primes and new primes to form a structural symmetry using 2 and 3, and discusses how this symmetry repeats at intervals of 30, generated by 5.
Category: Number Theory
[69] viXra:2406.0071 [pdf] submitted on 2024-06-14 21:15:43
Authors: Parker Emmerson
Comments: 25 Pages.
The goal of this paper is to take phenomenological velocity's algebraic expression and crunch it down to a single string of letters. Doing this, we can then solve for the expressions of phenomenological velocity in terms of infinity-balancing statements using reverse engineering. After this, we use Fukaya categories to get expressions for the curvature of the operations in the symbols of the phenomenological velocity string. Using operators and functors to signify mathematical operations in an abstract way, we create functors and operators for the equation involving v and use them to "crunch" the given expression into a single string of letters.
Category: Mathematical Physics
[68] viXra:2406.0070 [pdf] replaced on 2024-07-19 05:10:24
Authors: Bryce Petofi Towne
Comments: 22 Pages.
This paper presents an approach to analyzing the non-trivial zeros of the Riemann zeta function using polar coordinates. We investigate whether the real part of all non-trivial zeros can be determined to be a constant value. By transforming the traditional complex plane into a polar coordinate system, we recalculated and examined several known non-trivial zeros of the zeta function. Our findings provide an alternative framework for understanding this profound mathematical conjecture. Through mathematical proof and leveraging analytic continuation and holomorphic function theory, we explore the nature of sigma in the polar coordinate system. This analysis transforms the problem into a geometric one, allowing for simpler and more intuitive calculations. This approach provides a step towards an alternative understanding of the properties of the Riemann zeta function's non-trivial zeros. The findings of this work indicate that, with this geometric perspective, the Riemann Hypothesis holds true.
Category: Number Theory
[67] viXra:2406.0069 [pdf] submitted on 2024-06-13 21:00:29
Authors: Chung Sung Jang, Yu Sung Kim, Myong Hyok Sin, Nam Ho Kim
Comments: 7 Pages.
This paper describes the non-parametric identification of a feedback system by two different controllers without exterior excitation. The proposed method does not necessarily require any prior information for processing and, furthermore, it can estimate the time delay and model order with accuracy. Its efficiency is proved by simulation.
Category: Set Theory and Logic
[66] viXra:2406.0068 [pdf] submitted on 2024-06-13 01:44:03
Authors: Hyon Sung-Yun, Kwang Min-Sok, Myong Hyok-Sin, Nam Ho-Kim
Comments: 12 Pages.
In this paper, we formulate a continuous-time cobweb model expressed as a conformable fractional derivative in the Liouville-Caputo sense, and a continuous-time cobweb model expressed as a beta-type conformable fractional derivative in the Liouville-Caputo sense, obtain an analytical solution of this model and analyze the properties of the solution. We also compare the results with previous cobweb model solutions through several examples.
Category: Functions and Analysis
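For context on the cobweb model family the abstract above generalizes, here is a minimal simulation of the classical integer-order, discrete cobweb model (demand a - b*p_t equals supply c + d*p_{t-1}); the fractional formulations themselves are not reproduced here:

```python
def cobweb(a, b, c, d, p0, steps=200):
    """Classical discrete cobweb model: market clearing a - b*p_t = c + d*p_{t-1}
    gives p_t = (a - c - d*p_{t-1}) / b, converging to (a - c)/(b + d) when d < b."""
    p = p0
    for _ in range(steps):
        p = (a - c - d * p) / b
    return p

p_star = (10 - 2) / (1.0 + 0.5)                 # analytical equilibrium price
p_end = cobweb(a=10, b=1.0, c=2, d=0.5, p0=9.0)  # iterated price
```

With d < b the iteration spirals into the equilibrium; with d > b it diverges, which is the classical "cobweb" instability the continuous-time fractional models refine.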
[65] viXra:2406.0067 [pdf] submitted on 2024-06-13 20:59:58
Authors: Sin Ryu Song, Ri Kwang, Yun Chol
Comments: 9 Pages.
In this paper, we provide a remarkable method for constructing a continued fraction from a given power series. Then we establish a new continued fraction approximation for the Lugo and Euler-Mascheroni constants. In particular, we analytically determine the coefficients of Lugo's asymptotic formula and all parameters of the continued fraction in terms of Bernoulli numbers.
Category: Functions and Analysis
[64] viXra:2406.0066 [pdf] submitted on 2024-06-13 20:59:26
Authors: Ji Won Pak, Kwang Chol Kim, Kwang Song Han
Comments: 15 Pages.
Many software reliability growth models have been proposed for use in practice. However, most software reliability growth models suffer in realistic software testing environments due to unrealistic assumptions, such as perfect debugging, a constant fault detection rate and regular changes. In fact, considering more reasonable assumptions in the reliability modeling may further improve the fitting and predictive power of software reliability growth models. Testing is affected by many factors, such as the tester's skill, test plans, testing tools and the runtime environment. Thus, software debugging is an imperfect process. Moreover, software testing for obtaining the fault data set is done under the assumption that the user's operation environment is the same as the testing one. However, in practice, it is not exactly the same. This paper deals with a software reliability growth model which considers imperfect debugging and disagreement between operation environments. The better performance of the proposed model is illustrated with fault data sets from a software development project.
Category: Functions and Analysis
[63] viXra:2406.0065 [pdf] submitted on 2024-06-13 20:58:43
Authors: Jang Chol Guk, Ra Song Nam, Kim Hyon Jin, Ri Yong Gwang, Yun Jong Gyong
Comments: 8 Pages.
In Brinell hardness measurement, the measurement of the indentation diameter is an important process related to measurement accuracy. In this paper, a special image input device was designed to input a magnified image of the indentation to increase the accuracy of hardness measurement. Then, a method for determining Brinell hardness from the indentation diameter obtained by a Hough transform on the input image was established and implemented as a device using a single-board computer, the Raspberry Pi 3. The experimental results demonstrate that the designed measurement device provides high-accuracy and convenient measurement.
Category: Digital Signal Processing
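Downstream of the Hough transform described above, the hardness number itself comes from the standard Brinell formula. A sketch assuming force F in kgf and ball/indentation diameters D, d in mm (the usual conventions; the paper's exact units are not stated):

```python
import math

def brinell_hardness(F_kgf, D_mm, d_mm):
    """Standard Brinell formula: HB = 2F / (pi * D * (D - sqrt(D^2 - d^2))),
    where F is the applied force in kgf, D the indenter ball diameter and
    d the measured indentation diameter (both in mm)."""
    return (2.0 * F_kgf) / (math.pi * D_mm * (D_mm - math.sqrt(D_mm**2 - d_mm**2)))

# A common test condition: 3000 kgf load with a 10 mm ball.
hb = brinell_hardness(F_kgf=3000, D_mm=10.0, d_mm=4.0)  # ~228.8 HB
```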
[62] viXra:2406.0064 [pdf] submitted on 2024-06-13 20:57:46
Authors: Kwang Su Kim, Jin Myong Kim, Pok Sol Chae, Song Gwon Ri
Comments: 6 Pages.
In this paper, a compact coplanar waveguide (CPW)-fed quintuple-band-notched antenna for ultra-wideband (UWB) applications is presented. The voltage standing wave ratio (VSWR) results show that the proposed antenna exhibits good wideband performance over a UWB frequency range from 1.4 to 10.7 GHz with VSWR less than 2, except for five stop-bands at 2.1 to 2.6 GHz, 3.3 to 3.75 GHz, 5.15 to 5.85 GHz, 7.2 to 7.6 GHz and 8.15 to 8.52 GHz for filtering worldwide interoperability for microwave access (WiMAX) systems, wireless local area networks (WLANs), the downlink of the X-band satellite communication system and ITU band signals, respectively. Good radiation patterns and gain characteristics are obtained over the whole UWB frequency range except the notched frequency bands. The simulation results were compared with the measured results and good agreement was obtained. The proposed antenna provides a simple structure and good characteristics and is suitable for UWB applications.
Category: Digital Signal Processing
[61] viXra:2406.0063 [pdf] submitted on 2024-06-13 20:55:04
Authors: Victor Christianto, Florentin Smarandache
Comments: 6 Pages.
As we all know, change is the very nature of what is happening. "What has not changed," laments Hock, "is the mechanistic, hierarchical, command-and-control idea of organization that originated with Newton, Descartes, and the Industrial Age. That concept of organization," says Hock (not to mention the world view that spawned it), "is not only increasingly archaic and irrelevant, but it's also antithetical to the human spirit... It has become a public menace." Therefore we are searching for a new and more inspiring term in lieu of conventional human resources management.
Category: Social Science
[60] viXra:2406.0062 [pdf] submitted on 2024-06-13 20:55:57
Authors: Victor Christianto, Florentin Smarandache
Comments: 8 Pages.
Interactions between light and water molecules have baffled scientists for many decades, even centuries. In this regard, photons in the visible spectrum, where bulk water normally doesn't absorb light, can surprisingly cleave off large water clusters from the water-vapor interface, according to a recent study by Tu and Chen (2023). This discovery, termed the "photomolecular effect," opens exciting possibilities for not only revolutionizing renewable energy but also paving the way for a more integrated-ensemble approach to health management (cf. Smarandache & Christianto, 2010; Tu & Chen, 2023; Tu et al., 2024). In a sense, other than with green or red LEDs, we can also introduce a low-intensity laser to alter water molecules, as we discussed earlier in this journal (cf. Christianto, Chandra, Smarandache, 2023a; 2023b).
Category: Physics of Biology
[59] viXra:2406.0061 [pdf] submitted on 2024-06-13 14:18:54
Authors: Aswan Korula
Comments: 11 Pages.
The Michelson-Morley experiment and its resolution by the special theory of relativity form a foundational truth in modern physics. In this paper I propose an equivalent relativistic experiment involving a single-source interferometer having infinite arms. Further, we debate the possible outcomes of such an experiment and in doing so uncover a conflict between special relativity and the symmetry of nature. I demonstrate this conflict by the method of reductio ad absurdum.
Category: Relativity and Cosmology
[58] viXra:2406.0060 [pdf] replaced on 2024-06-18 21:07:41
Authors: Michael Prince
Comments: 14 Pages.
For [a long time], scientists and philosophers have grappled with the enigma of the Big Bang’s singularity, seeking to understand the primordial trigger that ignited the universe’s explosive expansion. Despite significant advances in cosmology, the origins of this singularity remain shrouded in mystery, fueling ongoing debate and research. We all learn that the Big Bang marked the birth of our observable universe from an ultra-hot, ultra-dense singularity of infinite density and zero volume. But if we follow the logic rigorously, this conventional picture turns out to be incomplete and inconsistent with some fundamental premises. For any volumetric increase or growth to occur, there must be pre-existing available space or "room" to expand into initially. This is intuitive - things simply cannot begin increasing in size if there is no space to expand into. Now consider the conventional model of the Big Bang - our entire observable universe emerged from an initial state of infinite density called the "singularity" which had zero volume. Zero volume means no dimensions, no space whatsoever. Here’s the key point - if the singularity truly started with zero volume, and yet it expanded rapidly in all directions producing the vast volumes we see today, then there logically had to be some pre-existing space surrounding that singularity to allow for that expansion. Total zero volume couldn’t just grow spontaneously into something with dimension - that violates the premise. But there’s more. In our current understanding, the concepts of space and time are inseparably interlinked through Einstein’s theories. Space and time are woven together into the fabric of spacetime. So if there was pre-existing space before the singularity, basic logic demands there must also have been some form of pre-existing time dimension as well. I know this may seem contradictory to the standard idea that space and time themselves emerged from the Big Bang event.
But follow the logic clearly — if there was room for the expansion, and space implies time, then some sort of primordial space-time must have pre-dated the singularity itself. This doesn’t negate or deny the Big Bang paradigm. The initial inflation could still have propelled the singularity outwards rapidly creating the spacetime we experience today. But it shows that the Big Bang wasn’t the beginning of all existence - some earlier form of space and time had to have preceded and allowed for that expansion in the first place.
Category: Relativity and Cosmology
[57] viXra:2406.0059 [pdf] submitted on 2024-06-13 20:50:09
Authors: Andreas Ball
Comments: 22 Pages. (Correction made by viXra Admin to conform with the requirements of viXra.org)
In this report the author addresses four themes. Regarding the first topic, [Part 2] presents derivations of ancient approximations for the Circle Figure π using the figures of the two- and three-dimensional cases for the straight and the round. [Part 3] deals with possible connections between the Circle Figure π and the Golden Section Ф using modified terms as presented in the first topic. In [Part 4] a complete solution of the puzzle around Leonardo da Vinci's drawing Vitruvian Man is presented, which is mostly based on the information given by two German authors. In [Part 5] a system is described which is based on the geometrical system of the drawing Vitruvian Man and by which a connection between the Circle Figure π and the Golden Section Ф is attempted.
Category: General Mathematics
[56] viXra:2406.0058 [pdf] submitted on 2024-06-13 20:53:17
Authors: Clark M. Thomas
Comments: 9 Pages.
Because we live on a bejeweled planet, humans are very interested in all rocky planets. Planets come in many sizes and varieties. There may be more planets in the Milky Way than stars. So far, only our Earth has been shown to host philosophically advanced life. I was one of the first to write about life on rogue planets without local suns. This new essay updates planets without stars by including multiple-body orbits, and how planets could form and mutually orbit without any dust star of origin.
Category: Astrophysics
[55] viXra:2406.0057 [pdf] submitted on 2024-06-13 23:59:31
Authors: Manuel Abarca Hernandez
Comments: 30 Pages. (Note by viXra Admin: Further repetition will not be accepted)
This paper develops a theory of DM within the current LCDM framework, whose main hypothesis is that DM is generated by the gravitational field itself, according to an unknown quantum gravitational phenomenon. The hypothesis of DM by quantum gravitation, DMbQG hereafter, has two main consequences: first, the law of DM generation has to be the same, in the halo region, for all galaxies; second, the haloes are unbounded, so the total DM grows without limit. The first consequence is backed by the fact that M31 and the MW have fitted functions with the same power exponent. The theory is first developed with M31 rotation curve data, up to chapter 10. Chapter 11 applies the theory to the MW; the results for its direct mass are tested successfully using data published at different radii. Chapter 12 calculates the direct mass for the L.G. The DMbQG theory is the only one able to calculate a total mass at 770 kpc that matches dynamical measures of mass. Chapter 13 shows a method to estimate the direct mass formula for a cluster of galaxies, using its virial mass and radius. By this method the parameter a2 is estimated for the L.G., Virgo and Coma. Chapter 14 shows how DE is able to counterbalance the DM, as the direct mass grows with the square root of the radius whereas the DE grows with the cubic power. This theory aims to be a powerful method to study DM in the halo region of galaxies and clusters of galaxies, and conversely the measures in galaxies and clusters offer the possibility to validate the theory.
Category: Astrophysics
[54] viXra:2406.0056 [pdf] submitted on 2024-06-11 21:32:40
Authors: Philip Naveen
Comments: 42 Pages.
This manuscript is merely a formal documentation of the purpose and details surrounding the online convex optimization toolbox (OCOBox) for MATLAB. The purpose of this toolbox is to provide a collection of algorithms that work under stochastic situations where traditional algorithmic theory does not fare so well. The toolbox encompasses a wide range of methods including Bayesian persuasion, bandit optimization, Blackwell approachability, boosting, game theory, projection-free algorithms, and regularization. In the future, we plan to extend OCOBox to interactive machine learning algorithms and develop a more robust GUI.
Category: Artificial Intelligence
[53] viXra:2406.0055 [pdf] replaced on 2025-04-08 18:36:58
Authors: L. Martino, F. Llorente
Comments: 29 Pages.
Improper priors are not allowed for the computation of the Bayesian evidence Z = p(y) (a.k.a. marginal likelihood), since in this case Z is not completely specified due to an arbitrary constant involved in the computation. However, in this work, we remark that they can be employed in a specific type of model selection problem: when we have several (possibly infinite) models belonging to the same parametric family (i.e., for tuning parameters of a parametric model). However, the quantities involved in this type of selection cannot be considered Bayesian evidences: we suggest the name "fake evidences" (or "areas under the likelihood" in the case of uniform improper priors). We also show that, in this model selection scenario, using a proper prior and increasing its scale parameter asymptotically to infinity, we cannot recover the value of the area under the likelihood obtained with a uniform improper prior. We first discuss this from a general point of view. Then we provide, as an illustrative example, all the details for Bayesian regression models with nonlinear bases, considering two cases: the use of a uniform improper prior and the use of a Gaussian prior, respectively. A numerical experiment is also provided, confirming and checking all the previous statements.
Category: Statistics
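The asymptotic claim in the abstract above can be checked in the simplest conjugate case: for y ~ N(theta, 1) with a proper prior theta ~ N(0, s^2), the evidence is Z = N(y; 0, 1 + s^2), which vanishes as s grows, whereas the area under the likelihood (uniform improper prior) is exactly 1. This is an illustrative toy model, not the authors' regression example:

```python
import math

def evidence_gaussian(y, s):
    """Evidence Z = integral of N(y; theta, 1) * N(theta; 0, s^2) d(theta)
    = N(y; 0, 1 + s^2) for the conjugate model y ~ N(theta, 1), theta ~ N(0, s^2)."""
    var = 1.0 + s**2
    return math.exp(-y**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

area_under_likelihood = 1.0              # integral of N(y; theta, 1) over all theta
z_wide = evidence_gaussian(y=0.3, s=1e6)  # proper prior with a huge scale
```

As s increases, Z decays like 1/s and never approaches the improper-prior value of 1, which is the point of the "fake evidence" distinction.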
[52] viXra:2406.0054 [pdf] submitted on 2024-06-12 05:39:22
Authors: Seiji Tomita
Comments: 3 Pages.
In this paper, we prove that there are infinitely many integers that can be expressed as the sum of four cubes of polynomials.
Category: Number Theory
[51] viXra:2406.0053 [pdf] replaced on 2024-06-23 00:29:25
Authors: Dmitriy S. Tipikin
Comments: 4 Pages. More accurate formula was used for evaluation of angle of scattering.
The James Webb Space Telescope continues to make discoveries, and some of them seemingly contradict all known astrophysics data. For example, a type 1a supernova (a standard candle, a well-researched object) was recently recorded [1], but the overall image size of that supernova at redshift z=2.9 corresponds to around 5000 light years at that distance; its angular size is around 10 times the resolution of the telescope and far larger than any physics possibly allows. This is the size of a small galaxy and by no means can be allowed for a supernova (especially a standard candle, which is well researched and whose sizes were predicted long ago). The only reason for such a blurred, big image is the scattering of light itself - the farther the object observed, the larger that scattering [2] - and the evaluation of the size of the image (angle of scattering) using formulas from [2] seems to confirm once more the tired light theory.
Category: Astrophysics
[50] viXra:2406.0052 [pdf] submitted on 2024-06-12 20:44:12
Authors: Vladimir S. Netchitailo
Comments: 59 Pages.
The Hypersphere World-Universe Model is consistent with all Concepts of the World. The Model successfully describes primary cosmological parameters and their relationships. WUM allows for precise calculation of values that previously could only be measured experimentally and makes verifiable predictions. The remarkable agreement of calculated values with the observational data gives us considerable confidence in the Model. The great experimental results and observations achieved by Astronomy in recent decades should be analyzed through the prism of WUM. Considering the JWST discoveries, the successes of WUM, and 86 years since Dirac's proposals, it is high time for a Paradigm Shift in Cosmology and Classical Physics.
Category: Relativity and Cosmology
[49] viXra:2406.0051 [pdf] submitted on 2024-06-10 23:10:04
Authors: Morteza Mahvelati
Comments: 36 Pages.
In classical physics, linear and angular motion as well as linear and angular momentum have long been defined. In this paper it becomes apparent through analysis that there is much need for a new type of motion to be introduced and denoted. As such, centrial motion is introduced and described as another form of motion not previously presented. Furthermore, a new form of momentum called centrial momentum is defined and elaborated. As a result, the motion of complex bodies can be analyzed and studied with much more simplicity and ease than previously done via classical physics. Along with the discussion of centrial motion and momentum, the concept of linear motion based on the motion of momentum is also studied and analyzed, and the law of motion of momentum is defined. Additionally, complex scenarios are introduced where the discussions assist in a much simpler understanding of the classical scenarios of the motions presented. It becomes readily apparent that the centrial motion equations and relationships derived are best suited for the study of these types of motions. In addition, in this paper, motion scenarios that cannot be explained by classical physics are discussed and adequately explained by presenting new concepts. Through deeper analyses, it is found that momentum is not conserved. However, the kinetic energy of an isolated system, if not transformed to other forms of energy, remains conserved.
Category: Classical Physics
[48] viXra:2406.0050 [pdf] replaced on 2024-06-27 20:42:09
Authors: Chun-Hu Cui, He-Song Cui
Comments: 32 Pages.
In DeFi (Decentralized Finance) applications, and in dApps (Decentralized Applications) generally, it is common to periodically pay interest to users as an incentive, or periodically collect a penalty from them as a deterrent. If we view the penalty as a negative reward, both the interest and penalty problems come down to the problem of distributing rewards. Reward distribution is quite accomplishable in financial management where general computers are used. On a blockchain, however, where computational resources are inherently expensive and the amount of computation per transaction is strictly limited by a predefined, uniform quota, system administrators not only have to pay heavy gas fees if they handle the rewards of many users one by one, but the transaction may also be terminated midway. The computational quota makes it impossible to guarantee processing an unknown number of users. We propose novel algorithms that solve Simple Interest, Simple Burn, Compound Interest, and Compound Burn tasks, which are typical components of DeFi applications. If we put numerical errors aside, these algorithms realize accurate distribution of rewards to an unknown number of users with no approximation, while adhering to the computational quota per transaction. For those who might already be using similar algorithms, we prove the algorithms rigorously so that they can be transparently presented to users. We also introduce reusable concepts and notations in decentralized reasoning, and demonstrate how they can be efficiently used. We demonstrate, through simulated tests spanning over 128 simulated years, that the numerical errors do not grow to a dangerous level.
Category: Data Structures and Algorithms
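The standard O(1) device for the problem class described above is the pull-based "reward per share" accumulator widely used in staking contracts: crediting a reward touches no individual account, and each user settles lazily. The authors' Simple/Compound Interest and Burn algorithms are presumably refinements of this idea; the sketch below is the generic pattern in Python (rather than Solidity) for readability:

```python
class RewardPool:
    """Pull-based reward distribution: distributing a reward costs O(1) for the
    admin regardless of how many stakers exist; each user is settled lazily
    against a global per-share accumulator."""
    def __init__(self):
        self.total_stake = 0
        self.acc = 0.0        # cumulative reward per unit of stake
        self.stake = {}       # user -> staked amount
        self.snapshot = {}    # user -> value of acc at last settlement
        self.owed = {}        # user -> settled but unclaimed reward

    def _settle(self, user):
        s = self.stake.get(user, 0)
        delta = self.acc - self.snapshot.get(user, 0.0)
        self.owed[user] = self.owed.get(user, 0.0) + s * delta
        self.snapshot[user] = self.acc

    def deposit(self, user, amount):
        self._settle(user)
        self.stake[user] = self.stake.get(user, 0) + amount
        self.total_stake += amount

    def distribute(self, reward):
        # O(1): never iterates over users, so it fits any per-transaction quota.
        if self.total_stake:
            self.acc += reward / self.total_stake

    def pending(self, user):
        self._settle(user)
        return self.owed[user]

pool = RewardPool()
pool.deposit("alice", 100)
pool.deposit("bob", 300)
pool.distribute(40)   # alice's share is 10, bob's is 30
```

Treating a penalty ("burn") as a negative reward fits the same accumulator, which is the observation the abstract opens with.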
[47] viXra:2406.0049 [pdf] submitted on 2024-06-11 20:02:05
Authors: Fernando Salmon Iza
Comments: 8 Pages.
The growing interest in dark energy and dark matter has made studies of the energy density of the universe a very current topic. Furthermore, new cosmological measurements are calling into question the validity of the ΛCDM model, and it is necessary to review it in depth. To address these new challenges, Professor Fulvio Melia has developed a linear-expansion universe model, the Rh=ct universe, which is giving very good results in relation to the new cosmological measurements. In this report we have developed, within this model of the universe, an equation that allows us to calculate the value of the energy density as a function of the age of the universe. The value obtained from our equation, 0.97x10^-26 kg/m^3, coincides with the current experimental value of the energy density obtained by the Planck mission. For this reason, we believe that our equation can be useful for determining energy densities of the universe at earlier and later times. With this wish we present our work.
Category: Astrophysics
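The order of magnitude quoted in the abstract above can be reproduced from the critical density of a flat universe with H = 1/t, i.e. rho(t) = 3/(8*pi*G*t^2). Whether this is exactly the equation the paper derives is an assumption; the sketch only shows the arithmetic is consistent:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
GYR = 3.156e16     # seconds per gigayear

def density_rh_ct(t_gyr):
    """Critical density of a flat R_h = ct universe, where H = 1/t:
    rho(t) = 3 H^2 / (8 pi G) = 3 / (8 pi G t^2), in kg/m^3."""
    t = t_gyr * GYR
    return 3.0 / (8.0 * math.pi * G * t**2)

rho_now = density_rh_ct(13.8)   # ~9.4e-27 kg/m^3, same order as the quoted 0.97e-26
```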
[46] viXra:2406.0048 [pdf] submitted on 2024-06-11 20:05:53
Authors: Fernando Salmon Iza
Comments: 5 Pages.
The standard cosmological model ΛCDM cannot respond to some important new results of modern cosmology. Challenges arise such as Microwave Background Uniformity, the Hubble Stress, the El Gordo collision or impossible galaxies (z > 10) that the standard cosmological model does not solve. On the other hand, other models are proposed as alternatives.Professor Fulvio Meliá's linear expansion universe, Rh=ct, solves these challenges where the standard model fails. This model is based on the relationship Rh = ct where Rh is the gravitational horizon, which coincides with the Hubble radius, "t" is the age of the universe and "c" is the speed of light. Although the model is already theoretically based [3], in this work we have obtained the constraint Rh = ct as a consequence of the spatially flat universe.
Category: Astrophysics
[45] viXra:2406.0047 [pdf] submitted on 2024-06-11 19:26:15
Authors: Edgar Valdebenito
Comments: 3 Pages.
In this note we solve an equation with radicals and give two series for Pi.
Category: Functions and Analysis
[44] viXra:2406.0046 [pdf] submitted on 2024-06-10 20:05:12
Authors: Junho Eom
Comments: 16 Pages. 4 figures
Primes less than a given number n (n >= 2) determine new primes within a limited area, increased with a square (n^2) or decreased with a square root (sqrt(n)). As the area is extended, the number of primes also changes and is controlled within an extended area boundary, or number boundary, n to n^2 or n to sqrt(n). The structure of a number boundary is applied to the Euler product and helps to characterize Euler's prime boundary between n and (n^2 - 1). The characterized Euler product is used to characterize the non-trivial zeros of the Riemann zeta function derived in an elementary way. Then, the characterized Euler product and non-trivial zeros are discussed regarding their potential number boundaries. Overall, it is concluded that the characteristic of a number boundary can represent the characteristic of primes, especially the number of primes. As the number boundary is characterized by an increased or decreased exponent while the base or given number n is fixed, it is concluded that the pattern of exponents in the number boundary would be a key to understanding the pattern of primes.
Category: Data Structures and Algorithms
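The elementary fact underlying the n-to-n^2 boundary described above is that the primes up to n determine all primes up to n^2, since every composite number up to n^2 has a prime factor at most n. A direct check of this (a standard sieve argument, not the paper's construction):

```python
def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    flags = [True] * (limit + 1)
    flags[0:2] = [False, False]
    for p in range(2, int(limit**0.5) + 1):
        if flags[p]:
            flags[p*p::p] = [False] * len(flags[p*p::p])
    return [i for i, f in enumerate(flags) if f]

def new_primes_from(n):
    """Primes in (n, n^2] obtained only by striking out multiples of the
    primes <= n -- every composite <= n^2 has a factor <= n."""
    base = primes_up_to(n)
    candidates = set(range(n + 1, n * n + 1))
    for p in base:
        first = ((n // p) + 1) * p           # first multiple of p above n
        candidates -= set(range(first, n * n + 1, p))
    return sorted(candidates)

ok = new_primes_from(10) == [p for p in primes_up_to(100) if p > 10]
```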
[43] viXra:2406.0045 [pdf] submitted on 2024-06-10 16:52:07
Authors: Taha Sochi
Comments: 75 Pages.
We present in this article a general approach (in the form of recommendations and guidelines) for tackling Diophantine equation problems (whether single equations or systems of simultaneous equations). The article should be useful in particular to young "mathematicians" dealing mostly with Diophantine equations at the elementary level of number theory (noting that familiarity with elementary number theory is generally required).
Category: Number Theory
[42] viXra:2406.0044 [pdf] submitted on 2024-06-10 20:41:49
Authors: Christophe Duplan
Comments: 18 Pages.
This study investigates the implications of Planck scales on the causality of low-mass particles at very high energies. Utilizing a fractal approach to space-time, we propose novel dynamics for the fabric of space-time and its interaction with special relativity. Our findings indicate that physical values converge at the Planck scale, revealing potential implications for quantum gravity and unified theories. Specifically, the study explores the self-similar properties of fundamental physical constants, the redefinition of vacuum permittivity, and the anomaly in the light cone for low-mass particles. We hypothesize a secondary fermionic causality limit, distinct from the speed of light, which could account for these anomalies. Furthermore, the results suggest that the properties of the vacuum at the Planck scale could offer new explanations for cosmic inflation, the Big Bang singularity, and the nature of dark matter and dark energy. By redefining the Planck length as a compact fractal object, this research opens promising avenues for future investigations into quantum gravity and the unification of fundamental forces.
Category: High Energy Particle Physics
[41] viXra:2406.0043 [pdf] submitted on 2024-06-10 20:51:17
Authors: J. W. Vegt
Comments: 46 Pages.
Nuclear fusion represents a frontier melding the realms of material science, typified by fusion fuels like Deuterium, and energy science, characterized by microwave heating methodologies. Current theoretical physics paradigms fall short in adequately describing the complex interactions required to stabilize nuclear fusion, particularly within confinement devices such as Tokamaks. Addressing this limitation necessitates a novel theory that accurately encompasses the interactions between electro-magnetic-gravitational force densities (expressed in N/m³) and their mechanical analogues, articulated through the Navier-Stokes equation for compressible nuclear plasmas.This pioneering theoretical framework offers an all-encompassing perspective on electro-magnetic-gravitational-acceleration force density interactions across both astronomical and subatomic scales. It spans phenomena as diverse as Gravitational RedShift, Black Holes, and the discrete energy levels of atomic light absorption and emission. Uniquely, this theory integrates electrodynamics and plasma dynamics into a single cohesive model. Traditionally overlooked, gravitational (acceleration) forces resulting from rotational and linear accelerations are revealed here as pivotal for achieving stable nuclear fusion.Unlike General Relativity, this new theory is grounded on the combined divergence of the "Stress-Energy Tensor" and the "Gravitational-Acceleration" Tensor. It elucidates "Gravitational-Acceleration-Electromagnetic" interactions, providing mathematical tensor solutions for Black Holes or Gravitational Electromagnetic Confinements. 
The "Electromagnetic Energy Gradient" generates a second-order "Lorentz Transformation," translating into the Gravitational Field of Black Holes, which dictates force density interactions between light confinement and the "Gravitational-Acceleration" Field.In juxtaposition to Einstein's introduction of the "Einstein Gravitational Constant" within the four-dimensional Energy-Stress Tensor, our theory capitalizes on the additive properties of the Electromagnetic Tensor and the "Gravitational-Acceleration" Tensor. This revised vantage point unveils the concept of "CURL" within gravitational fields surrounding Black Holes, influencing Gravitational Lensing—phenomena unaccounted for by General Relativity.Additionally, the theory identifies "Electromagnetic-Gravitational Interaction," "Magnetic-Gravitational Interaction," and "Electric-Gravitational Interaction." It proposes that interactions are exclusive to field interactions rather than particle-field interactions as traditionally conceived: electric fields engage with other electric fields, magnetic fields with other magnetic fields, and gravitational fields with other gravitational fields.This advanced theoretical approach provides precise mathematical descriptions of Black Holes, as initially proposed by John Archibald Wheeler in 1955. The theoretical solutions for Black Holes are integral to the Dirac equation's tensor form in relativistic quantum mechanics. Assuming a constant speed of light (c) and Planck’s constant within a Black Hole, the radius of a Black Hole with the energy of a proton approximates 1% of a hydrogen atom radius.Empirical substantiation is derived from experiments involving two Galileo satellites and a Ground Station, where Gravitational RedShift was measured using a stable MASER frequency. The discrepancy between General Relativity and the New Theory's predictions for Gravitational RedShift within Earth's gravitational field is less than 10^(-16). Observational data since W.S. 
Adams' 1925 measurement of the gravitational redshift in the spectral lines from the White Dwarf companion to Sirius consistently aligns with both theories within negligible margins. Theories seeking to unify Quantum Physics with General Relativity, such as "String Theory," suggest temporal variability in natural constants. However, precise observations from NASA's Messenger mission have significantly constrained potential variations in the gravitational constant (G). A distinguishing feature of the New Theory is its prediction of a temporally constant (G), reinforcing the unification of General Relativity and Quantum Physics.
Category: Nuclear and Atomic Physics
[40] viXra:2406.0042 [pdf] submitted on 2024-06-09 16:42:08
Authors: Kuo Tso Chen
Comments: 7 Pages. 1 figure
This paper tackles a physics problem persisting for over 150 years—the unsolved issue tied to 'Maxwell's demon' since 1871. It offers a potential solution to the longstanding problem of the second law of thermodynamics. The second law of thermodynamics states that the conversion rate of thermal energy into other forms of directional energy, such as kinetic, potential, or electrical energy, is constrained by the temperature difference divided by the absolute temperature. Essentially, without a temperature difference, thermal energy cannot be converted into other forms of directional energy. In this paper, we identify and theoretically demonstrate a scenario that surpasses this limitation. Specifically, we show that under certain conditions, thermal energy can be continuously converted into electrical energy. The proposed method involves placing a pair of uniformly rotating polarizers between two black bodies. When radiation from the black bodies perpendicularly strikes the polarizers, the conservation of angular momentum ensures that, in the absence of friction, the rotation speed of the polarizers does not decrease. With an appropriate configuration, this setup can cause asymmetric radiation exchange between the black bodies, thereby generating a temperature difference automatically. This temperature difference can then be harnessed to convert thermal energy into electrical energy. Thus, it is possible to naturally generate a temperature difference from an initial state of thermal equilibrium and convert it into electrical energy without any loss of angular momentum or energy, thereby transcending the limitations of the second law of thermodynamics. This breakthrough suggests the potential for a sustainable and environmentally friendly energy source.
Category: Thermodynamics and Energy
[39] viXra:2406.0041 [pdf] replaced on 2024-06-20 06:34:48
Authors: Rami Rom
Comments: 31 Pages.
According to the Standard Model (SM) the quantum vacuum is not empty. However, General Relativity (GR) and the SM do not describe the vacuum structure. We propose a valence quark-based theory of the quantum vacuum structure based on a pion tetrahedron fabric that fills space with varying density. We assume that the valence quarks and antiquarks u, d, ū, d̄ form the vacuum pion tetrahedron fabric. Motion of particles made of quarks on the vacuum pion tetrahedron fabric proceeds by quark exchange reactions, by tunneling through a double-well potential, while motion of massless particles proceeds through internal degrees of freedom of the pion tetrahedron fabric. Active Galactic Nuclei (AGN) systems may be Carnot engines working between cold black hole (BH) and hot accretion disc reservoirs. The AGN Carnot cycle may create and emit to space pion tetrahedrons, protons, electrons and photons in pulses by the AGN jets. An alternative explanation for the observed expansion of the Universe may be the creation and emission of pion tetrahedrons to space by the AGN jets, which expands the Universe's quantum vacuum pion tetrahedron fabric from inside like a balloon. Shakura and Sunyaev's thin accretion disc analytic expressions are used for calculating the pion tetrahedron mass and the number of pion tetrahedrons emitted by an AGN Carnot engine in a cycle.
Category: High Energy Particle Physics
[38] viXra:2406.0040 [pdf] submitted on 2024-06-09 16:39:48
Authors: Andreas Martin
Comments: 18 Pages.
This publication contains a mathematical approach for a reinterpretation of the calculation of the magnetic moment for the Einstein-de Haas experiment under the assumption of a magnetic field density from the elaboration "The reinterpretation of the 'Maxwell equations'" [1]. The basis for this is Faraday's unipolar induction, which has proven itself in practice in combination with the calculation rules of vector analysis and differential calculus. The newly calculated "Maxwell equations" offer a generally valid calculation approach for the Einstein-de Haas experiment and its problem that the difference between measurement and calculation is a factor of 2. This connection is established mathematically in this work. It is shown that the magnetic moment can be derived mathematically by using one of the newly calculated basic equations of electrodynamics from the elaboration "The reinterpretation of the 'Maxwell equations'" [1]. The gradient of the magnetic flux density, grad B⃗, and its mathematical consequences regarding the divergence of the magnetic flux density, div B⃗, play an important role in this essay. By formulating that the trace of the gradient of the magnetic flux density, Sp(grad B⃗), corresponds to the divergence of the magnetic flux density, div B⃗, a direct connection of the magnetic flux density field itself with the field density of the magnetic flux density is revealed. It also explains and corrects the difference between measurement and calculation in the Einstein-de Haas experiment. This is successful because alternating current and alternating voltage were used to carry out the experiment [2]. Due to this fact, the "Maxwell equations" can be used for calculation, and therefore also their new formulation from the article "The reinterpretation of the 'Maxwell equations'" [1].
Category: Mathematical Physics
[37] viXra:2406.0039 [pdf] submitted on 2024-06-09 16:35:24
Authors: Aras Dargazany
Comments: 9 Pages.
This theory is an attempt to unify general relativity and quantum mechanics by integrating: the Einstein Field Equation for the Gravitational Wave in General Relativity; the Schrödinger Field Equation for the Quantum Wave in Quantum Mechanics; the Maxwell Field Equation for the Photon Wave in Electromagnetism; the Hawking Field Equation for the Radiation Wave in Black Holes; and the Heisenberg Uncertainty Principle for Minimal Action (or Entropy). This unification leads to the potential prediction of the Graviton (mass, charge, and spin).
Category: Quantum Gravity and String Theory
[36] viXra:2406.0038 [pdf] replaced on 2024-06-16 01:02:40
Authors: Ervin Goldfain
Comments: 21 Pages.
It is known that both classical and Quantum Field Theory (QFT) are built on the fundamental principle of stationary action. The goal of this introductory work is to analyze the breakdown of stationary action under nonadiabatic conditions. These conditions are presumed to develop far above the Standard Model scale and favor the onset of Hamiltonian chaos and fractal spacetime. The nearly universal transition to nonadiabatic behavior is illustrated using a handful of representative examples. If true, these findings are likely to have far-reaching implications for phenomena unfolding beyond the Standard Model scale and in early Universe cosmology.
Category: Mathematical Physics
[35] viXra:2406.0037 [pdf] submitted on 2024-06-08 04:51:00
Authors: Fuyuan Xiao
Comments: 3 Pages.
In this paper, we propose a quantum evidential reasoning rule in the framework of generalized quantum evidence theory.
Category: Artificial Intelligence
[34] viXra:2406.0036 [pdf] submitted on 2024-06-09 03:31:05
Authors: Biruk Alemayehu Petros
Comments: 11 Pages. This is continuation of published result.
This paper presents an analytic solution to the Navier-Stokes equations for incompressible fluid flow with a periodic initial velocity vector field. Leveraging Fourier series representations, the velocity fields are expressed as expansions, accounting for their temporal evolution. The solution's existence and smoothness are verified by demonstrating its consistency with the Navier-Stokes equations, including the incompressibility condition and pressure compatibility. The proposed solution contributes to understanding fluid dynamics and offers insights into the millennium prize problem related to the Navier-Stokes equations. This work lays the groundwork for further investigations into fluid flow behavior under various conditions and geometries, combining analytical and numerical approaches to advance our understanding of fluid dynamics.
Category: Functions and Analysis
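As background to the abstract above: a standard example of a smooth, spatially periodic exact solution of the incompressible Navier-Stokes equations is the classical 2D Taylor-Green vortex (textbook material, not the Fourier-series construction proposed in the paper). A minimal numerical check of its incompressibility condition, div u = 0, might look like:

```python
import math

def taylor_green(x, y, t, nu=0.1):
    """Classical 2D Taylor-Green vortex: an exact, smooth, spatially
    periodic solution of the incompressible Navier-Stokes equations."""
    decay = math.exp(-2.0 * nu * t)
    u = math.cos(x) * math.sin(y) * decay
    v = -math.sin(x) * math.cos(y) * decay
    return u, v

def divergence(x, y, t, h=1e-6):
    """Central-difference estimate of du/dx + dv/dy."""
    u_p, _ = taylor_green(x + h, y, t)
    u_m, _ = taylor_green(x - h, y, t)
    _, v_p = taylor_green(x, y + h, t)
    _, v_m = taylor_green(x, y - h, t)
    return (u_p - u_m) / (2 * h) + (v_p - v_m) / (2 * h)

# The divergence vanishes identically; numerically it is zero up to
# finite-difference and floating-point error.
print(abs(divergence(0.7, 1.3, t=0.5)))
```

Here du/dx = -sin(x)sin(y)·decay cancels dv/dy = +sin(x)sin(y)·decay exactly, which is the incompressibility the abstract refers to.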
[33] viXra:2406.0035 [pdf] submitted on 2024-06-07 01:17:32
Authors: Junjie Huang, Fuyuan Xiao
Comments: 1 Page.
In this paper, to extend the traditional evidential reasoning (ER) method to the complex plane, a novel complex evidential reasoning (CER) method is defined in the framework of complex evidence theory (CET).
Category: Artificial Intelligence
[32] viXra:2406.0034 [pdf] submitted on 2024-06-07 05:47:54
Authors: Stephen H. Jarvis
Comments: 28 Pages.
In moving forward with the scaling and surveying keys of paper 60 of Temporal Mechanics, an ellipsoid structure joining the proposed time-equation with the proposed space-equation as the ellipsoid timespace field mechanism is revealed. There, in direct reference to the Collatz conjecture, a solution to the three-body problem is proposed for both the sub-quantum and quantum particle levels, revealing the foundational time and space code of empty space directly comparable to current ideas and values for zero-point energy and the zero-point field.
Category: Mathematical Physics
[31] viXra:2406.0033 [pdf] replaced on 2024-06-13 12:34:02
Authors: Dmitri Martila
Comments: 4 Pages.
Explanations of why the real part of the zeta function's zeros is always seen to be on the 1/2 line.
Category: Condensed Matter
[30] viXra:2406.0032 [pdf] submitted on 2024-06-07 16:46:19
Authors: Kuan Peng
Comments: 21 Pages.
While a 3D complex number would be useful, it does not exist. Recently, I have constructed the N-complex number, which has demonstrated high efficiency in computations involving high-dimensional geometry. The N-complex number provides arithmetic operations and polar coordinates for N-dimensional spaces, akin to the classic complex number. In this paper, we will explain how these systems work and present studies on 4D Klein bottles and hyperspheres to illustrate the advantages of these systems.
Category: Geometry
[29] viXra:2406.0031 [pdf] submitted on 2024-06-08 03:43:43
Authors: Benjamin Chung
Comments: 52 Pages.
In an effort to better understand the mechanisms underlying finance and economics, this investigation simplifies an economy to its most basic form —an economic space of assets consisting of atomic equity and debt. By applying the concept of diffusion to debt, the corresponding behaviour of equity was investigated. The findings reveal that debt and equity cannot both diffuse or concentrate simultaneously at the macroscopic scale of a market-driven economy, and that equity only concentrates in an economy experiencing robust economic growth. Additionally, if diffusion is a required assumption for homogeneous mixing, and debt transforms into equity with some probability, then an economic system can be modelled as a system of competing viral infections within a susceptible population, or market. It is shown how parameters of infection correspond to measures of sales and marketing, suggesting that the product/business lifecycle curve is very likely an infection curve. This Economic Infection model provides a unified framework that can incorporate metrics used in sales and marketing —such as conversion rate, churn rate, engagement rate, etc.— to forecast revenue and market share growth for market competitors whose values can be estimated. Also, a preliminary decomposition of Price Elasticity of Demand within this economic infection framework reveals multiple contributing elasticities (including the Price Elasticity of Supply) which producers and retailers can manipulate to shift PED more positive or negative. These decomposed elasticities align with several known pricing strategies aimed at driving sales quantities, with one particular elasticity identified as a possible driver of demand-pull inflation.
Category: Economics and Finance
[28] viXra:2406.0030 [pdf] replaced on 2024-06-14 21:25:31
Authors: Bassera Hamid
Comments: 1 Page. Sent to American Mathematical Society in June 05 2024
In this article I try to make my modest contribution to the proof of Goldbach’s conjecture and I propose to simply go through its negation.
Category: Number Theory
[27] viXra:2406.0029 [pdf] submitted on 2024-06-08 00:32:44
Authors: Yuan Xu, Kai-Ting Fan
Comments: 21 Pages.
Plant-microbe interactions lie at the heart of ecosystem dynamics and agricultural productivity. Metabolomics has revolutionized our understanding of these interactions, providing an unprecedented glimpse into the intricate chemical dialogues that shape their outcomes. This review explores the kaleidoscopic array of metabolomics techniques employed to investigate plant-microbe interactions, from the cutting-edge realms of mass spectrometry and nuclear magnetic resonance spectroscopy to the visually stunning world of imaging. We also delve into the application of state-of-the-art bioinformatics tools, databases, and the rapidly evolving fields of artificial intelligence and machine learning in metabolomics data analysis. By seamlessly weaving together metabolomics with other omics approaches, such as transcriptomics, proteomics, and metagenomics, we can paint a comprehensive portrait of the molecular tapestry that underlies plant-microbe interactions. Moreover, we shine a spotlight on the crucial complementary role of fluxomics in illuminating the dynamic ebb and flow of metabolic networks. Despite the formidable challenges inherent in data analysis, integration, and interpretation, metabolomics has catalyzed a paradigm shift in our understanding of the multifaceted roles metabolites play in sculpting plant-microbe interactions. As metabolomics technologies continue to evolve and synergize with other omics approaches, we find ourselves on the precipice of groundbreaking discoveries that will unravel these complex interactions and ultimately usher in a new era of sustainable agriculture and biotechnology.
Category: Biochemistry
[26] viXra:2406.0028 [pdf] replaced on 2024-09-19 03:42:04
Authors: Deokjin Kim
Comments: 10 Pages.
In previous studies, from our originative method for the integration of the four fundamental forces, the dark energy ratio was calculated as 72.916%. In this study, the dark energy ratio was calculated as 72.9138% and 68.5741% by our originative idea and formula. Additionally, from the gravitational constant G = 6.67430E-11 m^3/(kg s^2), the cosmological constant was calculated as 1.106169E-52 /m^2, the age of the universe as 13.784 BY, and the Hubble parameter as 67.833 km/s/Mpc and 72.777 km/s/Mpc. Simultaneously with the above results, the radiation density of 9.117E-5 (= CMBγ 5.408E-5 + CNBν 3.708E-5) was calculated, and the value of 13.784E9 Y x 5.408E-5 / 2 is 372,700 years. The following very important results were obtained from this study: the dark energy ratio is a constant regardless of the flow of time, and the cosmological constant is a parameter of the flow of time, like the Hubble parameter.
Category: High Energy Particle Physics
[25] viXra:2406.0027 [pdf] submitted on 2024-06-06 00:49:05
Authors: Teo Banica
Comments: 400 Pages.
This is an introduction to graph theory, from a geometric viewpoint. A finite graph $X$ is described by its adjacency matrix $d\in M_N(0,1)$, which can be thought of as a kind of discrete Laplacian, and we first discuss the basics of graph theory, by using $d$ and linear algebra tools. Then we discuss the computation of the classical and quantum symmetry groups $G(X)\subset G^+(X)$, which must leave invariant the eigenspaces of $d$. Finally, we discuss similar questions for the quantum graphs, with these being again described by certain matrices $d\in M_N(\mathbb{C})$, but in a more twisted way.
Category: Combinatorics and Graph Theory
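The statement above that symmetries must leave the eigenspaces of $d$ invariant follows from the standard fact that the permutation matrix $P$ of a graph automorphism commutes with the adjacency matrix. A minimal illustration on the 4-cycle (an example chosen here, not taken from the book):

```python
def matmul(A, B):
    """Plain list-of-lists matrix product."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Adjacency matrix d of the 4-cycle graph 0-1-2-3-0.
d = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]

# Permutation matrix P of the rotation 0->1->2->3->0, a graph automorphism.
perm = [1, 2, 3, 0]
P = [[1 if perm[i] == j else 0 for j in range(4)] for i in range(4)]

# P d = d P: the symmetry commutes with d, so if d v = t v then
# d (P v) = t (P v), i.e. each eigenspace of d is preserved.
print(matmul(P, d) == matmul(d, P))  # True
```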
[24] viXra:2406.0026 [pdf] submitted on 2024-06-06 20:46:41
Authors: X. N. Ismatullaev
Comments: 3 Pages. In Russian
In this work, the Lagrangian of a two-component Bose-Einstein condensate is derived in terms of the Gaussian Ansatz parameters.
Category: Condensed Matter
[23] viXra:2406.0025 [pdf] submitted on 2024-06-06 05:28:47
Authors: Seiji Tomita
Comments: Pages.
In this paper, we proved that there are infinitely many integers n such that a+b+c=1/a+1/b+1/c=n has infinitely many rational solutions.
Category: Number Theory
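The equation in the abstract above can be explored empirically with exact rational arithmetic. A small brute-force search over rationals with bounded numerator and denominator (an illustrative sketch, unrelated to the paper's actual proof) finds, for example, the solution a=1, b=2, c=-2 with n=1:

```python
from fractions import Fraction
from itertools import product

# Small search space of nonzero rationals p/q.
vals = {Fraction(p, q) for p in range(-4, 5) for q in range(1, 4) if p != 0}

solutions = set()
for a, b, c in product(sorted(vals), repeat=3):
    s = a + b + c
    # Keep triples where a+b+c = 1/a+1/b+1/c and the common value is an integer n.
    if s == 1/a + 1/b + 1/c and s.denominator == 1:
        solutions.add((int(s), a, b, c))

# Check the explicit solution: 1 + 2 - 2 = 1 and 1/1 + 1/2 - 1/2 = 1.
print((1, Fraction(1), Fraction(2), Fraction(-2)) in solutions)  # True
```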
[22] viXra:2406.0024 [pdf] submitted on 2024-06-06 05:40:01
Authors: Abhishek Kumar
Comments: 4 Pages. Keywords : Mathematics in Machine Learning, Statistics, Calculus, Linear Algebra, Probability
Machine learning (ML) is a prominent branch of artificial intelligence (AI) that has drastically transformed various fields by providing sophisticated tools for data analysis and prediction. This paper reviews the pivotal role of mathematics in the development and refinement of machine learning algorithms. The core objective is to illustrate how mathematical principles underpin the processes of training and optimizing ML models, ensuring their effectiveness in recognizing patterns and making autonomous decisions from data.
Category: Algebra
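As a minimal concrete example of the mathematics the review above refers to, gradient descent on a mean-squared-error loss (calculus and linear algebra underpinning model training) can be sketched as follows; the data here are hypothetical, generated from y = 2x + 1:

```python
# Hypothetical data following y = 2x + 1; gradient descent on the
# mean-squared error should recover the slope and intercept.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]

w, b, lr = 0.0, 0.0, 0.05
n = len(xs)
for _ in range(5000):
    # Partial derivatives of (1/n) * sum((w*x + b - y)^2).
    dw = (2 / n) * sum((w * x + b - y) * x for x, y in zip(xs, ys))
    db = (2 / n) * sum((w * x + b - y) for x, y in zip(xs, ys))
    w -= lr * dw
    b -= lr * db

print(round(w, 3), round(b, 3))  # converges to approximately 2.0 and 1.0
```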
[21] viXra:2406.0023 [pdf] submitted on 2024-06-06 21:03:37
Authors: Xiaochun Mei
Comments: 15 Pages. In Chinese (Converted to pdf and abstract shortened by viXra admin - Please only submit articles in pdf format)
There are some very basic problems with the existing definition of momentum operators in quantum mechanics, and it needs to be further improved. For example, when the kinetic energy operator and the momentum operator are used to calculate the kinetic energy of microscopic particles, the results obtained are generally different. This article redefines the momentum operator of quantum mechanics and proposes the concept of a universal momentum operator to solve the above problems well. The universal momentum operator only changes the direction of the particle's momentum, but does not change the particle's kinetic energy and energy. Since the calculated values of these two momentum operators differ in the direction of particle motion, additional momentum and additional angular momentum result. This article gives the relationship between the additional angular momentum and the spin of microscopic particles, and explains the nature of the spin of microscopic particles. It is proved that spin is related to the part of angular momentum that cannot be described by the angular momentum operator of quantum mechanics, and is consistent with the inference of the Dirac equation. The description using the Schrödinger equation is unified with the description of the Dirac equation. The so-called spin of an electron is not rotation around itself, but rotation around the outermost orbit of the atomic nucleus. The spin operator of microscopic particles is a quantum number, which can be used to make calculations conveniently. On this basis, it is proved that the real reason why Bell's inequality is not supported by experiments is that quantum mechanics has a wrong understanding of the concept of spin projection, and the formula used in deriving Bell's inequality does not hold.
Category: High Energy Particle Physics
[20] viXra:2406.0021 [pdf] submitted on 2024-06-06 20:30:06
Authors: Raul A. Félix de Sousa
Comments: 39 Pages.
A model for biopoesis is proposed where a complex, dynamic ecosphere, characterised by steep redox potentials, precedes and conditions the gradual formation of organismal life. A flow of electrons across the Archean hydrosphere, proceeding from the reducing constituents of the lithosphere and pumped by the photolytic production of oxygen in the Earth's atmosphere is the central feature of this protobiological environment. The available range of electrochemical potentials allows for the geochemical cycling of biogenic elements. In the case of carbon, carboxylation and decarboxylation reactions are essential steps, as in today's organisms. Geochemical evidence for high levels of carbon dioxide in the Earth's early atmosphere and the biological relevance of carboxylations are the basis for a hypercarbonic conception of the primitive metabolic pathways. Conversion of prochiral chemical species into chiral molecules, inherent to hypercarbonic transformations, suggests a mechanistic method for the generation of homochirality through propagation. The solubility of oxygen in lipid materials points to an aerobic course for the evolution of cellularity.
Category: Biochemistry
[19] viXra:2406.0020 [pdf] submitted on 2024-06-05 19:56:21
Authors: Budee U. Zaman
Comments: 9 Pages.
This paper presents a new proof of the Goldbach conjecture, a well-known problem originating in number theory that was proposed by Christian Goldbach back in 1742. Our approach gives a simple but deep understanding of how even integers can be written as the sum of two prime numbers. Through a full examination, we show that every even integer larger than two can be represented as the sum of two prime numbers. A straightforward and elegant line to this enduring conjecture comes from the use of basic number theory concepts, by going a step further and coming up with creative strategies. Further evidence and sound arguments for the assertion are given as we continue. The centuries-old mathematical puzzle has been solved, paving the way for the exploration of new possibilities in number theory, and we are grateful for the perspective and the persistence accorded us by God, which enabled us to reach this milestone.
Category: Number Theory
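Independent of the claimed proof above, Goldbach's conjecture is easy to check empirically for small even numbers. A minimal sieve-based verification up to 10,000 (an illustration of the statement, not a proof):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes, returning the set of primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return {i for i, is_prime in enumerate(sieve) if is_prime}

primes = primes_up_to(10_000)

# Every even number 4..10000 should split as p + q with p, q prime.
ok = all(any(n - p in primes for p in primes if p <= n // 2)
         for n in range(4, 10_001, 2))
print(ok)  # True
```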
[18] viXra:2406.0019 [pdf] submitted on 2024-06-05 19:53:11
Authors: Tianyu Yuan
Comments: 15 Pages.
This paper aims to leverage the advancements in General Computer Control (GCC) to improve the efficiency and effectiveness of risk management operations in financial institutions. Specifically, we introduce an LLM-based Robotic Process Automation (RPA) framework designed to enhance front-line employee work, adapt to the specific needs of financial institutions, and automate tasks requiring minimal cognitive effort. To demonstrate the effectiveness of our proposed framework, stress testing, a common task for risk management department, is used as a case study. The results show that the RPA system can improve efficiency, reduce costs, and minimize errors, all without significantly altering the existing workflow. Moreover, to address customer information security and prompts copyright protection issues, a storage method that separates the server from the client is used. Finally, empirical evidence implies that even models with weaker capabilities can achieve the desired work objectives when guided by detailed prompts.
Category: Economics and Finance
[17] viXra:2406.0018 [pdf] replaced on 2025-12-03 10:10:14
Authors: Andreas Ball
Comments: 18 Pages.
In this report, approximations of selected physical constants are presented, whose results mostly lie far within the tolerance of the constants (that is the reason for the attribute "exact" in the title) and which often show a similar form with repeating figures. Besides the quotient of the Golden Ratio and the circle figure π, especially the figures 144 and 666 have to be named regarding the figures used in these approximations. Because of their interplay the author calls them the Versatile Four. The author first became aware of the figure 666 through simple mathematical relations with input data of the earth, moon and sun, which is described in chapter 2. Gradually the author noticed that the figure 666 cooperates well with the figure 144. The assumption that the figures 144 and 666, in connection with the circle figure and the Golden Ratio, are suitable to describe physical constants as well led to the approximations, which can be read in the extensive chapter 3. The figures 144 and 666 are often used to form fine-tuning terms, for example of the form [1 ± x/(144*666)], which are further used as the basis of selected exponents. The selected quantities x and the selected exponents naturally have to be conclusive figures or terms.
Category: Mathematical Physics
[16] viXra:2406.0017 [pdf] submitted on 2024-06-04 01:48:39
Authors: Shao-Dan Lee
Comments: 4 Pages.
We have constructed an ideal with respect to a subset of binary operations. In this paper, we construct a prime ideal with respect to a nonempty subset of binary operations in an algebra. Let P and Q be two prime ideals with respect to Φ and Ψ, respectively. Then we have that P ∪ Q is a prime ideal if some conditions hold.
Category: Algebra
[15] viXra:2406.0016 [pdf] submitted on 2024-06-04 13:30:29
Authors: Ricardo Gil
Comments: 3 Pages.
The distribution and density of the nontrivial zeros of the Riemann zeta function affect the error term in the Prime Number Theorem. If the Riemann Hypothesis holds, it implies a tighter error bound in the Prime Number Theorem.
Category: Number Theory
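The error-term claim above can be illustrated numerically: the logarithmic integral Li(x) approximates the prime-counting function π(x) far more tightly than the crude estimate x/ln x, consistent with the RH-conditional bound |π(x) - Li(x)| = O(√x log x). A minimal sketch (midpoint-rule integration; illustrative only):

```python
import math

def prime_count(n):
    """pi(n) via a sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return sum(sieve)

def li(x, steps=100_000):
    """Logarithmic integral Li(x) = integral from 2 to x of dt/ln t,
    approximated by the composite midpoint rule."""
    h = (x - 2.0) / steps
    return h * sum(1.0 / math.log(2.0 + (k + 0.5) * h) for k in range(steps))

x = 100_000
pi_x = prime_count(x)                    # pi(100000) = 9592
print(abs(pi_x - x / math.log(x)))       # x/ln x estimate: error of several hundred
print(abs(pi_x - li(x)))                 # Li(x) estimate: error of a few dozen
```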
[14] viXra:2406.0015 [pdf] submitted on 2024-06-04 14:53:09
Authors: Agustín A. Tobla
Comments: 7 Pages.
This paper presents a reformulation of special relativity, whose kinematic and dynamic magnitudes are invariant under transformations between inertial and non-inertial reference frames, which can be applied to massive and non-massive particles, and where the relationship between net force and special acceleration is as in Newton's second law. Additionally, new universal forces are proposed.
Category: Relativity and Cosmology
[13] viXra:2406.0014 [pdf] submitted on 2024-06-04 19:17:36
Authors: Edgar Valdebenito
Comments: 3 Pages.
This document briefly discusses improper integrals of the second kind.
Category: General Mathematics
[12] viXra:2406.0013 [pdf] submitted on 2024-06-03 21:16:01
Authors: John Caywood
Comments: Pages.
A data processing technique has been used to discover a new Kaon composed of an anti-down (d’) and a strange (s) quark. Anti-down is indicated by d’ instead of the usual d with an overline. The reason is d’ can be stored in a database and overlined d cannot. The K+ kaon is symmetric with the K- kaon, but the K0 composed of a down and anti-strange does not have a symmetric anti-down and strange opposite.
Category: High Energy Particle Physics
[11] viXra:2406.0012 [pdf] submitted on 2024-06-03 21:03:31
Authors: Taeho Jo
Comments: 13 Pages.
This article proposes the modified KNN (K Nearest Neighbor) algorithm which receives a graph as its input data and is applied to text summarization. The graph is more graphical for representing a word, and text summarization is able to be viewed as a binary classification where each paragraph is classified into summary or non-summary. In the proposed system, a text which is given as the input is partitioned into a list of paragraphs, each paragraph is classified by the proposed KNN version, and the paragraphs which are classified into summary are extracted as the output. The proposed KNN version is empirically validated as the better approach in deciding whether each paragraph is essential or not in news articles and opinions. In this article, a paragraph is encoded into a weighted and undirected graph, and it is represented as a list of edges.
Category: Artificial Intelligence
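For readers unfamiliar with the base algorithm being modified above, here is a minimal classic KNN classifier over plain numerical vectors (not the graph-input variant the paper proposes; the paragraph encodings and labels below are hypothetical):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Minimal KNN: train is a list of (vector, label) pairs; the query
    vector receives the majority label among its k nearest neighbours
    under Euclidean distance."""
    neighbours = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical paragraph encodings (e.g. two keyword-frequency features)
# labelled 'summary' or 'non-summary'.
train = [([5.0, 1.0], "summary"), ([4.0, 0.0], "summary"),
         ([1.0, 4.0], "non-summary"), ([0.0, 5.0], "non-summary"),
         ([1.0, 5.0], "non-summary")]

print(knn_classify(train, [4.5, 0.5]))  # "summary"
```

The papers in this series replace the vector encoding and distance with graph, string-vector, or feature-similarity variants while keeping this nearest-neighbour voting scheme.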
[10] viXra:2406.0011 [pdf] submitted on 2024-06-03 21:03:18
Authors: Taeho Jo
Comments: 13 Pages.
This article proposes the modified KNN (K Nearest Neighbor) algorithm which considers the feature similarity and is applied to text segmentation. The words which are given as features for encoding words into numerical vectors have their own meanings and semantic relations with others, and text segmentation is able to be viewed as a binary classification where each adjacent paragraph pair is classified into boundary or continuance. In the proposed system, a list of adjacent paragraph pairs is generated by sliding a text with the two-sized window, each pair is classified by the proposed KNN version, and the boundary is put between the pairs which are classified into boundary. The proposed KNN version is empirically validated as the better approach in deciding whether each pair should be separated from each other or not in news articles and opinions. The significance of this research is to improve the classification performance by utilizing the feature similarities.
Category: Artificial Intelligence
[9] viXra:2406.0010 [pdf] submitted on 2024-06-03 21:02:49
Authors: Taeho Jo
Comments: 12 Pages.
This article proposes the modified KNN (K Nearest Neighbor) algorithm which receives a string vector as its input data and is applied to text segmentation. The results from applying the string vector based algorithms to the text categorizations were successful in previous works, and text segmentation is able to be viewed as a binary classification where each adjacent paragraph pair is classified into boundary or continuance. In the proposed system, a list of adjacent paragraph pairs is generated by sliding a text with the two-sized window, each pair is classified by the proposed KNN version, and the boundary is put between the pairs which are classified into boundary. The proposed KNN version is empirically validated as the better approach in deciding whether each pair should be separated from each other or not in news articles and opinions. We need to define and characterize mathematically more operations on string vectors for modifying more advanced machine learning algorithms.
Category: Artificial Intelligence
[8] viXra:2406.0009 [pdf] submitted on 2024-06-03 21:02:38
Authors: Taeho Jo
Comments: 12 Pages.
This article proposes the modified KNN (K Nearest Neighbor) algorithm which receives a graph as its input data and is applied to text segmentation. The graph is more graphical for representing a word, and text segmentation is able to be viewed as a binary classification where each adjacent paragraph pair is classified into boundary or continuance. In the proposed system, a list of adjacent paragraph pairs is generated by sliding a text with the two-sized window, each pair is classified by the proposed KNN version, and the boundary is put between the pairs which are classified into boundary. The proposed KNN version is empirically validated as the better approach in deciding whether each pair should be separated from each other or not in news articles and opinions. In this article, an adjacent paragraph pair is encoded into a weighted and undirected graph, and it is represented as a list of edges.
Category: Artificial Intelligence
[7] viXra:2406.0008 [pdf] submitted on 2024-06-02 22:28:31
Authors:
Comments: 4 Pages. (Author name added to the article by viXra Admin as required)
Since Fermat's equation a^3 + b^3 = c^3 does not have a solution, we consider the following two Diophantine equations: a^3 + b^3 = w*c^3 (1) and a^3 - b^3 = w*c^3 (2). Equation (2) has also been discussed in the book by Tito Piezas (Ref. #3).
Category: Number Theory
[6] viXra:2406.0007 [pdf] submitted on 2024-06-02 22:24:31
Authors: Nimit Theeraleekul
Comments: 14 Pages.
"Constancy of light speed referenced to any inertial frame" is one of the basic assumptions in Einstein's special theory of relativity; providing it with a physical mechanism will change it from just an assumption into a real natural phenomenon. What we gain from this improvement is an understanding of the physical mechanism of the "relativistic effect" which gives rise to relativistic mechanics. Indeed, improving a physics theory by adding an appropriate mechanism is far-reaching; it could extend to Einstein's general theory of relativity and quantum mechanical theory, and would then be able to answer questions such as dark energy/matter and quantum entanglement, including the Higgs. Finally it would pave the way to the theory of everything!
Category: Relativity and Cosmology
[5] viXra:2406.0006 [pdf] submitted on 2024-06-02 22:29:47
Authors: Chan Rasjid Kah Chew
Comments: 13 Pages. (Correction made by viXra Admin to conform with the requirements of viXra.org)
There are great misconceptions and much confusion about how energy is transmitted by electric currents. The electric current carries no energy; it is the photon energy current within current-carrying conductors that transmits electrical energy. The magnetic fields surrounding current-carrying conductors play no part in electrical energy transmission. A simple classical derivation of Ohm's law is given. The working of the Zn/Cu Galvanic cell is examined; it is shown to be a photon generator.
Category: Classical Physics
[4] viXra:2406.0005 [pdf] submitted on 2024-06-02 22:13:15
Authors: Bryce Petofi Towne
Comments: 27 Pages.
This paper explores Illogical Classification-Based Thinking (ICBT) and its role in forming Positive, Negative, and Neutral Associations. Building on established theories such as the Halo and Horns Effects, and introducing Neutral Associations, this research examines how impressions lead to the automatic grouping of traits based on impressionistic judgments rather than logical reasoning. Using AI-generated images and a diverse participant pool, two studies were conducted: Study 1 confirmed the reliability of attractiveness categorizations, while Study 2 tested hypotheses related to trait associations. Results indicated that initial impressions significantly influence trait grouping across positive, negative, and neutral contexts, supporting the presence of ICBT. The integration of Kahneman's dual-process theory provided a comprehensive framework for understanding the cognitive processes involved. Findings have broad implications for social psychology, decision-making, consumer behavior, and organizational behavior, offering insights into how stereotypes and labeling are formed. Despite limitations such as sample representativeness and potential gender bias, this research contributes to a deeper understanding of impression-based judgments and cognitive categorization processes.
Category: Social Science
[3] viXra:2406.0004 [pdf] submitted on 2024-06-02 16:56:27
Authors: Tariq Khan
Comments: 5 Pages.
A short speculative philosophical essay presenting some unique or unorthodox descriptions of the nature of reality. A few example interpretations of reality are discussed, extending arguments from scientists including Julian Barbour, Donald D. Hoffman, and Roger Penrose on the nature of time, consciousness, and fundamental reality. The idea of Platonic Supremacy is proposed, whereby our reality coexists in a single universe with Platonic ideals. A universe without time is considered, as well as one in which Platonic ideals, rather than free will, drive all conscious actions as they interact with our minds.
Category: History and Philosophy of Physics
[2] viXra:2406.0003 [pdf] replaced on 2024-06-08 18:23:02
Authors: David M. Bower
Comments: 7 Pages.
By introducing three reference frames in addition to the two reference frames typically used to discuss Einstein's special relativity (i.e., a "laboratory" frame and a boosted frame), we can show the utter futility of trying to explain (or to resolve) the absurdities of the twin paradox.
Category: History and Philosophy of Physics
[1] viXra:2406.0001 [pdf] submitted on 2024-06-01 18:57:25
Authors: Vansh Kumar
Comments: 16 Pages.
This paper introduces Vision, a novel 175-billion parameter multimodal AI model. Vision is trained from scratch to natively understand text, images, video, and audio and to generate text and images, setting it apart from existing models. Developed with a focus on incorporating Indian context, values, and culture, Vision aims to empower users with a culturally relevant AI experience. A unique security feature allows generated images to be backtracked to Vision, mitigating concerns about potential misuse for misinformation. Evaluations on standard benchmarks demonstrate that Vision achieves state-of-the-art performance in a diverse range of tasks, including reasoning, solving mathematical problems, code generation, and image understanding. Furthermore, Vision exhibits remarkable proficiency in multilingual chat, supporting a wide array of global languages as well as regional Indian languages such as Hindi, Punjabi, and Marathi. We believe that Vision represents a significant step towards building more inclusive and culturally relevant AI systems, with the potential to positively impact various domains in India and beyond.
Category: Artificial Intelligence