[117] viXra:2604.0120 [pdf] submitted on 2026-04-30 23:18:55
Authors: Riccardo Fantoni
Comments: 7 Pages.
In a recent publication I proposed a new statistical theory of gravity [Riccardo Fantoni, Quantum Reports, 6, 706 (2024)] which describes the fluctuations of the spacetime metric through a virial temperature. In a subsequent publication [Riccardo Fantoni, Stats, 8, 23 (2025)] I discussed the foundations of this theory. Here I propose a possible way to render numerically accessible the path integral Monte Carlo computations required in this Statistical Gravity theory. This requires the use of the Arnowitt, Deser, and Misner (ADM) splitting and of the Affine Quantization (AQ) method.
Category: Quantum Gravity and String Theory
[116] viXra:2604.0119 [pdf] submitted on 2026-04-30 23:15:01
Authors: Stanislav Komarovsky
Comments: 12 Pages.
We prove that for any formal verification of any real system, the correspondence between the formal proposition and the system it describes cannot be established within any finite tower of formal languages. The proof follows from Tarski’s undefinability theorem applied iteratively: verifying that a proposition correctly describes a system requires expressing a correspondence claim that, by Tarski’s theorem, cannot be formulated within the proposition’s own language. Expressing the correspondence in a richer metalanguage generates a new correspondence claim that cannot be formulated in the metalanguage, producing an infinite regress that no finite extension of the formal framework can resolve. The result is structural, not contingent on current tooling or the complexity of the target system. We discuss five caveats—concerning human knowledge, the value of formal verification, partial gap closure, varying assumption strength, and the functionalist objection—and draw implications for the verification of AI-generated software artifacts.
Category: Artificial Intelligence
[115] viXra:2604.0118 [pdf] submitted on 2026-04-30 05:12:54
Authors: Sayan Bairagi, Sayan Singha Roy, Abir Rakshit, Anik Bhowmick
Comments: 41 Pages.
This work presents a zero-knowledge credential framework designed to enable secure and privacy-preserving attribute verification across multiple independent systems. The framework allows a user to prove statements of the form a ≥ t, where a ∈ Zq represents a secret attribute and t denotes a public threshold, without revealing the attribute value itself. At the same time, the framework prevents the exposure of any globally stable identifier, thereby eliminating the risk of cross-domain tracking. The construction is based on Pedersen commitments, where each attribute is encoded as C = g^a h^r ∈ G, with G ⊆ Z*_p denoting a cyclic group of prime order q. The generators g and h are selected such that the discrete logarithm relation between them is unknown. This ensures that the commitment is computationally binding under the discrete logarithm assumption and perfectly hiding due to the use of the randomness r. As a result, the committed attribute remains concealed while still allowing verification of statements about it. Predicate verification is achieved using a sigma protocol, which enables the prover to demonstrate knowledge of valid witnesses without revealing them. In particular, the protocol proves the relation C · g^{-t} = g^δ h^r, where δ = a − t. This transformation allows the system to verify threshold conditions such as a ≥ t without disclosing the value of a. The zero-knowledge property of the protocol ensures that the verifier learns only the validity of the statement and no additional information about the underlying attribute or randomness. To prevent correlation of user activity across different verification domains, the framework introduces scoped pseudonyms defined as ID_S = pk^{H(S)}, where pk = g^x is a public key derived from a secret key x, and H is a cryptographic hash function modeled as a random oracle. The scope S represents a domain-specific identifier.
This construction produces a unique identifier for each domain while ensuring that identifiers generated for different scopes cannot be linked without solving the discrete logarithm problem in G. Revocation is supported through an RSA accumulator constructed under the Strong RSA assumption. For a revoked set R = {r_i}, the accumulator value is defined as A = g^{∏ r_i} mod N, where N is an RSA modulus. The system enables efficient non-membership verification using witnesses derived from Bézout coefficients. This mechanism allows a verifier to confirm that a credential has not been revoked, while maintaining constant verification cost that does not depend on the size of the revoked set. (Truncated by viXra Admin)
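The commitment and pseudonym algebra in this abstract can be illustrated with a toy sketch. The tiny group parameters, the hash-to-exponent mapping, and the function names below are illustrative assumptions for exposition only, not the paper's implementation, and the parameters are far too small to be secure:

```python
# Toy sketch (NOT secure): Pedersen commitment C = g^a h^r and scoped
# pseudonym ID_S = pk^H(S), as described in the abstract above.
import hashlib

p, q = 23, 11          # p = 2q + 1; order-q subgroup of Z_p^*
g, h = 4, 9            # toy generators (in real use, log_g(h) must be unknown)

def commit(a, r):
    """Pedersen commitment C = g^a * h^r mod p."""
    return (pow(g, a, p) * pow(h, r, p)) % p

def shifted(C, t):
    """C * g^(-t) mod p, which should commit to delta = a - t."""
    return (C * pow(g, -t, p)) % p

def scoped_pseudonym(pk, scope):
    """ID_S = pk^H(S) mod p, with H modeled as a hash into Z_q."""
    e = int.from_bytes(hashlib.sha256(scope.encode()).digest(), "big") % q
    return pow(pk, e, p)

a, r, t = 7, 5, 3
C = commit(a, r)
# Homomorphic check: C * g^(-t) equals a commitment to a - t under the same r,
# which is the relation the sigma protocol proves knowledge of.
assert shifted(C, t) == commit(a - t, r)

x = 6                   # secret key
pk = pow(g, x, p)       # pk = g^x
id1 = scoped_pseudonym(pk, "domain-A")
id2 = scoped_pseudonym(pk, "domain-B")
```

The sigma-protocol transcript itself and the RSA accumulator are omitted; the sketch only exercises the commitment homomorphism and the per-scope identifier derivation.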
Category: Artificial Intelligence
[114] viXra:2604.0117 [pdf] submitted on 2026-04-30 14:49:04
Authors: Bradley Stone
Comments: 7 Pages.
We present Infinite Flow Cosmology (IFC), a brane-world framework in which the observable universe is modelled as an open thermodynamic system embedded within a dynamic, radiating 5D Vaidya-Anti-de Sitter bulk spacetime. Rather than invoking a static cosmological constant, IFC drives late-time cosmic acceleration through continuous mass-energy flux projected from the bulk onto the 4D brane. Using the Shiromizu-Maeda-Sasaki projection formalism, we derive the modified Friedmann equations and a generalised energy conservation law governing bulk-brane exchange. We constrain the dimensionless bulk-brane coupling constant to α ≈ 2.2 by matching the observed late-time expansion history, and derive an effective quintessence-like equation of state w(z) ≳ −1 that is distinct from and testable against a pure cosmological constant. The model predicts a mild but measurable enhancement in the large-scale structure growth parameter fσ₈ at redshifts z < 0.5, forecasting a signal detectable by the Euclid satellite and DESI survey. Finally, we demonstrate that localised bulk flux preferentially couples to regions of extreme spacetime curvature, providing a two-channel gravity-based mechanism for super-Eddington black hole accretion that simultaneously explains anomalous mass growth rates and the persistence of bright X-ray coronae and radio jets — observations that challenge standard slim disk models.
Category: Relativity and Cosmology
[113] viXra:2604.0116 [pdf] submitted on 2026-04-30 20:25:00
Authors: Ervin Goldfain
Comments: 13 Pages.
Rényi entropy is a generalization of the standard Shannon entropy and plays an important role in understanding the complex dynamics of non-equilibrium systems. In line with [7], this brief report discusses the definition and application of Rényi entropy to Quantum Field Theory (QFT) and to the gravitational regime of primordial cosmology. In particular, we review the connection between Rényi entropy and entanglement entropy, on the one hand, and the maximal entropy principle (MEP), on the other. Remarkably, the latter enables derivation of the Higgs and W-boson masses in close agreement with experimental data. Appealing to multifractal analysis and the concept of generalized dimensions, we further bridge the divide between Rényi entropy and the "sum-of-squares" relationship of particle physics, corresponding to maximal entropy at index q=0. We emphasize the deep analogy between QFT entanglement and the long-range correlations of complex dynamics and indicate that the MEP aligns with the universal route to chaos described by the Feigenbaum scenario. Finally, we review how Rényi entropy can be used to derive the four-dimensionality of classical spacetime for q=1/2.
Category: Mathematical Physics
[112] viXra:2604.0115 [pdf] submitted on 2026-04-29 21:18:27
Authors: Riccardo Fantoni
Comments: 6 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)
We formulate a new quantum many-body simulation method for a general quantum fluid at any given temperature. Unlike path integral Monte Carlo, our method evolves the density matrix in imaginary time from its initial delta function condition to its final thermal form at time equal to the inverse temperature. It does this with a molecular dynamics scheme on a classical Hamiltonian that has the same functional form as the quantum mechanical Hamiltonian, according to the properties of the continuous representation of John R. Klauder. We then end up with the thermal density matrix, which can be used to extract thermal averages of observables with the Monte Carlo method, equally well in any statistics.
Category: Data Structures and Algorithms
[111] viXra:2604.0114 [pdf] submitted on 2026-04-29 09:33:32
Authors: Andreas Ball
Comments: 17 Pages.
First, a very accurate formula for the light velocity is presented, in which the numbers 144 and 666 appear. An extremely accurate formula for the light velocity is then presented, additionally depending on the number 137. Furthermore, an investigation of the number 137 and its connection to other numbers is given. The physicist Arnold Sommerfeld investigated the spectral lines of the hydrogen atom and derived a value, which he called the Fine Structure Constant and which became famous through its reciprocal 137. The physicist Wolfgang Pauli and the psychoanalyst Carl G. Jung were also occupied with the significance of the number 137 from a metaphysical point of view. A proof at the end of this report shows why their undertaking was not misguided.
Category: Mathematical Physics
[110] viXra:2604.0113 [pdf] submitted on 2026-04-29 10:20:27
Authors: Bodo Lampe
Comments: 19 Pages.
A framework is analyzed where Standard Model physics emerges as a low-energy effective field theory on a (7+1)D spacetime and Minkowski space is a monolayer composed of unresolvable tetrahedral unit cells at the Planck scale. We demonstrate that the SSB of the isospin cells into a chiral all-out configuration naturally generates a local O(4) isospin symmetry which governs the dynamics of the Higgs doublet and the mass hierarchy of the electroweak sector. By identifying the axial generators of the broken symmetry with massive vector bosons, we recover the electroweak phenomenology. The inherent handedness of the isospin vacuum provides a 7D origin for parity violation, establishing a direct link between the topology of the unresolvable spatial lattice and the observed chiral nature of weak interactions. The role of the O(4) symmetry of the Higgs potential is elucidated, as is how the SM custodial symmetry is associated with neutrino masses, the latter being obtained from an isospin-orbit coupling between the isospins and the Planck lattice.
Category: High Energy Particle Physics
[109] viXra:2604.0112 [pdf] submitted on 2026-04-28 23:46:27
Authors: Ayush Samanta
Comments: 9 Pages. (Note by ai.viXra.org Admin: Please cite listed scientific references)
Fin geometry plays a critical role in determining the aerodynamic stability and performance of rockets, yet excessive fin area may reduce efficiency through added drag. In this study I investigated the effect of fin geometry on rocket flight characteristics using ten simulated variants analyzed in OpenRocket under controlled launch conditions with four wind levels, at a hypothetical launch site in Guadalajara, Spain. Metrics including apogee altitude, maximum velocity, acceleration, and time to apogee were compared across designs, revealing measurable tradeoffs between stability margin and ascent performance. The results indicate that moderate fin configurations can outperform both oversized and undersized designs by balancing stability with reduced aerodynamic losses.
Category: Classical Physics
[108] viXra:2604.0111 [pdf] submitted on 2026-04-28 23:35:04
Authors: Miroslav Pardy
Comments: 7 Pages. Original article
The problem of the scattering of spinless particles by spinless particles is considered under the condition that no other particles are created during the interaction. The matrix element of the scattering is defined and extracted from the general action of the current-current form. The differential cross section is then determined and its limiting formulas for very high and very low energies are derived.
Category: High Energy Particle Physics
[107] viXra:2604.0110 [pdf] submitted on 2026-04-28 11:14:18
Authors: Carlos Castro
Comments: 14 Pages.
After reviewing the basics of non-inertial relativity theory based on the existence of a maximal proper force $b$, we postulate a modified Newtonian attractive gravitational force (and potential) which is $finite$ at the origin, $|F(r=0)| = b$, and which vanishes at $r = \infty$. Secondly, from the modified gravitational potential energy we glean the expression for a running gravitational coupling $G(r)$ which exhibits asymptotic-freedom-like properties: $G(r=0) = 0$ and $G(r=\infty) = G_N$. No quantum corrections were necessary to decrease the strength of gravity at short distances. Thirdly, we find that for very $large$ masses $m_1, m_2$ (compared to $\sqrt{b}$) the $threshold$ in the values of $r$ obeying $\kappa r^2 \ll 1$, where the non-Newtonian regime becomes manifest, becomes larger and larger as $m_1, m_2$ become larger, whereas for very $small$ masses (compared to $\sqrt{b}$) this threshold becomes smaller and smaller as $m_1, m_2$ become smaller. In the $b = \infty$ limit one recovers the Newtonian gravitational force for all values of $r > 0$. These results were all made possible by abandoning the weak equivalence principle at short distances.
Category: Quantum Gravity and String Theory
[106] viXra:2604.0109 [pdf] submitted on 2026-04-28 23:30:20
Authors: Riccardo Fantoni
Comments: 12 Pages.
We propose a new quantum simulation method for a many-body quantum liquid of identical particles at finite (non-zero) temperature. The new scheme expands the high temperature density matrix on the overcomplete set of single-particle coherent states of John Rider Klauder instead of the usual plane waves as in conventional path integral methods. One is free to tune the elastic constant and/or the mass of the harmonic oscillator subtending the coherent states so as to maximize the computational efficiency of the algorithm. We prove that in the limit of an extremely stiff harmonic oscillator the results for the internal energy tend towards the correct expected values. Moreover, we suggest that a stiff harmonic oscillator could allow the use of larger (imaginary) timesteps. This additional degree of freedom is the characteristic feature of our new algorithm and is not available in more conventional path integral methods.
Category: Data Structures and Algorithms
[105] viXra:2604.0108 [pdf] submitted on 2026-04-28 13:49:16
Authors: Vladimir Kuz'menko
Comments: 3 Pages.
Several interesting publications have appeared in the literature on so-called "negative time" or "negative group delay" in experiments on the interaction of photons with cold atoms [1-6]. The physical meaning of this phenomenon is unclear. A physical explanation is proposed here, and simple experiments for its further study are discussed.
Category: Quantum Physics
[104] viXra:2604.0107 [pdf] submitted on 2026-04-28 23:24:13
Authors: Russell P. Patera
Comments: 9 Pages.
Rodrigues’ Rotation Formula is used to rotate a vector based on the Axis-Angle parameterization of attitude transformation. Given an axis and angle about which an initial vector is rotated, Rodrigues’ Formula yields the final orientation of the vector. However, Rodrigues’ Formula does not include the associated rotation about the vector axis, which depends on the specific trajectory used in slewing the axis from its initial to final orientation. A theorem in attitude kinematics that was not available when Rodrigues developed his formula contains attitude components that should be included with Rodrigues’ Formula. These attitude components were previously applied to the related problem of the Foucault Pendulum, which rotates with the Earth about the Earth’s spin axis. The methods to compute the rotation angle about the vector axis are the same as those used to compute the rotation of the Foucault Pendulum’s mounting fixture, since the trajectories are similar. This work presents the derivation of the formula for the missing rotation angle of Rodrigues’ Formula. Several numerical examples are presented to illustrate the use of the formula.
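The classical formula the abstract starts from can be sketched directly; this is a minimal illustration of the standard result v' = v cos θ + (k × v) sin θ + k (k · v)(1 − cos θ), not of the paper's additional rotation-angle correction:

```python
# Minimal sketch of Rodrigues' rotation formula:
# v' = v cos(theta) + (k x v) sin(theta) + k (k . v)(1 - cos(theta)).
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def rodrigues(v, k, theta):
    """Rotate vector v about the unit axis k by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    kv = cross(k, v)
    d = dot(k, v) * (1 - c)
    return tuple(v[i]*c + kv[i]*s + k[i]*d for i in range(3))

# Rotating x-hat about z-hat by 90 degrees gives y-hat (up to rounding).
v = rodrigues((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2)
```

Note that, exactly as the abstract observes, this formula returns only the rotated vector; any accumulated rotation about the vector's own axis along the slew trajectory is not represented.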
Category: Classical Physics
[103] viXra:2604.0106 [pdf] submitted on 2026-04-28 23:22:58
Authors: David Welker
Comments: 46 Pages. (Note by viXra Admin: Please cite and list scientific references and submit article written with AI assistance to ai.viXra.org)
Shaking Tangled Dimensions (TD) is a geometric and stochastic framework that seeks to unify particle physics, gravitation, and quantum behavior through a discrete, Planck-scale lattice of interconnected cores. Each core contains three spatial, three electrical, and three dark dimensions, whose local overlaps, bends, twists, and cross-connections are proposed to generate the observed structure of spacetime, particle properties, and force-like interactions. In this picture, fundamental particles arise as localized dimensional kinks and misconnections rather than as point-like objects existing independently of the lattice. Within TD, gravity is associated with the overlap volume and angular inclination of the spatial dimensions relative to a central reference point, while time is tied to local lattice update dynamics. Electromagnetic behavior emerges from the bending and twisting of the electrical dimensions, and photons, neutrino oscillations, spin, and several particle-family features are reinterpreted within the same geometric setting. Quantum uncertainty is attributed to continual Planck-scale lattice oscillations, providing a possible microphysical basis for the contingent ingredient in Barandes’ stochastic-quantum correspondence. Dark matter is treated primarily as a weakly self-ordering population of dark-dimensional kinks that contributes gravitationally through its effect on neighboring spatial overlap while remaining largely non-dissipative in the present cosmological epoch. Taken together, these ideas present TD as a developing candidate framework in which the phenomena normally assigned to separate formalisms may instead reflect different aspects of one underlying dimensional substrate. The model remains exploratory, but it offers a unified geometric program for relating spacetime structure, quantum contingency, particle behavior, and dark matter within a single lattice-based picture.
Category: Quantum Gravity and String Theory
[102] viXra:2604.0105 [pdf] submitted on 2026-04-28 23:19:12
Authors: Abdelmajid Ben Hadj Salem
Comments: 11 Pages.
In this article, we present the equations of the geodesic lines of a surface in $R^3$ and then carry out the computation of the geodesic lines of the ellipsoid of revolution, with a numerical example.
Category: Geometry
[101] viXra:2604.0104 [pdf] submitted on 2026-04-27 17:48:41
Authors: Abdelmajid Ben Hadj Salem
Comments: 117 Pages.
This booklet is tome IV of my Selected Papers. It collects my mathematical contributions in the field of Number Theory during the last 12 years.
Category: Number Theory
[100] viXra:2604.0103 [pdf] submitted on 2026-04-27 17:35:47
Authors: Juraj Povazanec
Comments: 40 Pages. (Note by viXra Admin: Please DO NOT name any equation/formula after the author's name and submit article written with AI assistance to ai.viXra.org)
The vacuum energy problem is one of the central unresolved tensions between quantum theory and gravitation. Quantum field theory predicts a vacuum energy density near 10¹¹³ J/m³, which in General Relativity would generate severe spacetime curvature, yet observation finds space remarkably flat. This article argues that the discrepancy traces to a centuries-old conceptual error: the nature of energy itself has been misclassified. Energy has been treated as a universal scalar currency, a single real-valued quantity. Yet close examination of the quantum substrate reveals that energy possesses an irreducible two-component structure. Maxwell showed this through light's two orthogonal polarizations. Dirac demanded it with his positive and negative energy solutions. Consider the foundational statement: energy curves spacetime. If energy is complex, how does such curvature manifest? Equally along two internal axes. Within this framework, the vacuum, perfectly balanced, yields no net observable curvature, resolving the catastrophe. But what of the exquisite curvature GR predicts so accurately around planets and stars? Precisely there. In GR, energy curves spacetime. What produces this curvature? Planets and stars. The imbalance of the vacuum's perfect symmetry. Spacetime curvature emerges only where the two components of energy depart from equilibrium. What follows develops this construction and its implications for quantum gravity, one where the vacuum sets the speed of light through its total amplitude, and curvature arises solely from asymmetry.
Category: Quantum Gravity and String Theory
[99] viXra:2604.0102 [pdf] submitted on 2026-04-27 16:54:02
Authors: Norm Cimon
Comments: 19 Pages.
The impetus for the work is this quote: "...as shown by Gel’fand’s approach, we can only abstract a unique manifold if our algebra is commutative." [1] Geometric algebra is non-commutative. Components of different grades can be staged on different manifolds. As operations on those elements proceed, they can effect the promotion and/or demotion of components to higher and/or lower grades, and thus to different manifolds. This paper includes imagery that visually displays bivector addition and rotation on a sphere. David Hestenes interpreted the vector product or rotor in two dimensions "as a directed arc of fixed length that can be rotated at will on the unit circle, just as we interpret a vector as a directed line segment that can be translated at will without changing its length or direction…" [2] Rotors can be used to develop addition and multiplication of bivectors on a sphere. For those rotational dynamics, rotors of fixed length are the basis elements. The geometric algebra of bivectors — Hamilton’s "pure quaternions" — is thus shown to operate transparently on a spherical manifold. This paper also explores the possible generalizations that emerge from the placement of the graded elements which make up a geometric algebra onto separate manifolds.
Category: Geometry
[98] viXra:2604.0101 [pdf] submitted on 2026-04-25 22:42:36
Authors: Theo Adebayo
Comments: 6 Pages. (Note by viXra Admin: Please cite and list scientific reference and submit article written with AI assistance to ai.viXra.org)
This paper develops the ordered-structure core of the Theory of Residual Cancellation (TRC). The main idea is that, in a suitable ordered setting, the common part of two positive elements can be recovered from ordered difference and positive-part structure rather than assumed independently. Let G be an ordered abelian group equipped with a positive-part operation u → u+, and define u− := (−u)+. For x, y ∈ G, define the TRC common-part candidate m(x,y) := x − (x−y)+. Under a positive/negative-part decomposition axiom, this operation is symmetric and yields a two-sided residual/common decomposition. Under the additional assumption that the positive-part map is monotone, the matched-content operation becomes monotone, maximal among common lower bounds, and equal to the meet on positive pairs. This gives an axiom-separated theorem ladder: one axiom governs the algebraic decomposition layer, while the second upgrades the decomposition into genuine order-theoretic meet recovery. The result identifies the abstract core of TRC as a compatibility principle between difference, positive part, and common part.
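The candidate operation m(x,y) := x − (x−y)+ can be checked concretely in the simplest ordered abelian group, (Z, +) with u+ = max(u, 0); this is an illustrative sanity check of the stated symmetry and meet-recovery properties, not the paper's general axiomatic argument:

```python
# Checking m(x, y) = x - (x - y)+ in the ordered group (Z, +),
# where the positive part is u+ = max(u, 0) and the meet is min.
def pos(u):
    return max(u, 0)

def m(x, y):
    return x - pos(x - y)

# Symmetry and agreement with the meet on all small integer pairs
# (in Z the identity m(x, y) = min(x, y) holds for all pairs, not
# only positive ones).
for x in range(-5, 6):
    for y in range(-5, 6):
        assert m(x, y) == m(y, x) == min(x, y)
```

The identity follows from x − max(x − y, 0) = min(x, y), which is exactly the compatibility between difference, positive part, and common part that the abstract describes.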
Category: Algebra
[97] viXra:2604.0100 [pdf] submitted on 2026-04-26 18:40:33
Authors: Sergey Y. Kotkovsky
Comments: 33 Pages. In Russian
As a basis for creating nonlinear algebra, we use vectors — mathematical objects with some predefined general properties, but without defining these objects in terms of numbers or numerical matrices. Next, we build our algebra based on the vector multiplication operation. Our approach allows us to obtain new and more generalized conceptions of vectors, scalars and related objects of a mixed scalar-vector type — generalized quaternions. We propose a fundamentally new perspective on such familiar concepts as space, vectors, quaternions, complexity, parallelism, orthogonality, and dimension. Within the framework of the new nonlinear algebra, geometric concepts such as parallelism and orthogonality acquire the operator meaning of vector commutativity and anticommutativity. The essence of the transition from linear to nonlinear representations lies in the transition from static geometric representations to operational ones. Vector cycles, which are triples of vectors cyclically connected to each other, occupy a special place in our algebraic system. The axiomatic framework we have constructed allows us to prove a number of statements important for the further development of the theory of nonlinear spaces.
Category: Algebra
[96] viXra:2604.0099 [pdf] submitted on 2026-04-26 18:37:17
Authors: Jorma Jormakka
Comments: 13 Pages. (Note by viXra Admin: Please cite and list scientific references of other authors)
The article points out several errors in the expanding space theory that is currently believed to explain the cosmological redshift. The article proposes a tired light theory that gives the correct reduction factor. This tired light theory includes a mechanism that slows down transitions of electrons at the atomic level. The mechanism is not explained: it is only proposed that the fine structure constant changes over long time periods. The article suggests that the Cosmic Microwave Background radiation did not come from the Big Bang and that it comes from the lost energy of photons traveling through space, as in tired light theories.
Category: Relativity and Cosmology
[95] viXra:2604.0098 [pdf] submitted on 2026-04-26 18:34:15
Authors: Wan-Chung Hu
Comments: 37 Pages. (Note by viXra Admin: Please cite listed scientific reference and submit article written with AI assistance to ai.viXra.org)
Similar to the electroweak interaction, the strong force and electromagnetism can have a similar Higgs-mechanism-mediated interaction. Thus, gluons can acquire mass, and neutral colored gluons have larger mass than colored gluons. In total, we have eight gluons without the red-anti-red gluon. The puzzle of the proton or neutron mass can be solved. We can also derive a new SU(5) model to include all of the above — eight gluons, three W/Z bosons, the photon, the Higgs boson, and three generations of leptons and quarks — to make a new 5x5 SU(5) model. The Wightman axioms can be fulfilled in this new SU(5) without causing a proton decay crisis. A 16f matter field construction is used to accommodate all the particles. We can also add the 4x4 four-dimensional spacetime tensor integrating mass-energy density, light pressure, electric fields, and magnetic fields, as well as four-gradients, to make a new contravariant SO(10) model. The Weyl tensor and Ricci tensor related to the new SO(10) model are also given. Thus, a grand unified theory or theory of everything can be obtained that is compatible with four-dimensional spacetime, without the extra dimensions needed in string theories.
Category: Mathematical Physics
[94] viXra:2604.0097 [pdf] submitted on 2026-04-26 18:30:17
Authors: Yuhua Li
Comments: 12 Pages. (Note by viXra Admin: Please cite listed scientific reference and submit article written with AI assistance to ai.viXra.org)
In this paper, we prove that for the classic integral representations of the Riemann $\xi$-function, $\xi(s)=\frac{1}{2}+\frac{s(s-1)}{2}\int_1^\infty\psi(x)\left(x^{s/2-1}+x^{(1-s)/2-1}\right)dx=-4\psi'(1)+\int_1^\infty\psi'(x)\left((1-s)x^{s/2}+s\,x^{(1-s)/2}\right)dx=2\int_1^\infty\left(\frac{3}{2}\psi'(x)+x\psi''(x)\right)\left(x^{s/2}+x^{(1-s)/2}\right)dx$, the boundary contributions arising at the common lower limit 1 of the three divergent series equal each other (including $\frac{-1}{2}$ for the first and $-4\psi'(1)$ for the second). This provides a new approach to proving the three integral representations without using integration by parts.
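The first of these classical representations can be checked numerically; the quadrature routine below is an illustrative sketch (the choice of truncation and step count are assumptions), comparing the integral form at s = 2 against the closed value ξ(2) = ½ s(s−1) π^{−s/2} Γ(s/2) ζ(s) = π/6:

```python
# Numerical sanity check of the first representation
#   xi(s) = 1/2 + (s(s-1)/2) * int_1^inf psi(x) (x^{s/2-1} + x^{(1-s)/2-1}) dx,
# with psi(x) = sum_{n>=1} exp(-pi n^2 x), evaluated at s = 2,
# where the completed zeta function gives xi(2) = pi/6.
import math

def psi(x, nmax=20):
    """Jacobi theta remainder psi(x) = sum_{n=1}^{nmax} exp(-pi n^2 x)."""
    return sum(math.exp(-math.pi * n * n * x) for n in range(1, nmax + 1))

def integral(s, upper=40.0, steps=4000):
    """Composite Simpson's rule on [1, upper]; psi decays like exp(-pi x),
    so the truncated tail is negligible."""
    f = lambda x: psi(x) * (x**(s/2 - 1) + x**((1 - s)/2 - 1))
    h = (upper - 1.0) / steps
    total = f(1.0) + f(upper)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(1.0 + i * h)
    return total * h / 3

s = 2
xi2 = 0.5 + (s * (s - 1) / 2) * integral(s)
```

With these parameters the integral form reproduces π/6 to well below 10⁻⁶, consistent with the representation being exact.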
Category: Number Theory
[93] viXra:2604.0096 [pdf] replaced on 2026-05-05 14:00:49
Authors: Dimiter Dobrev
Comments: 6 Pages.
The modern definition of AI contains an inaccuracy. According to the definition we have nowadays, AI is a computer program which is successful. Indeed, for a computer program to be successful, it must be intelligent, but the opposite is not true. A program can be intelligent but not successful, merely because it pursues different goals and does not aim at the success in question. From a theoretical perspective, the modern definition of AI is good enough because it answers the question "What is AI?" even though it does not encompass all intelligent programs, but only some of them. From a practical standpoint, however, this definition is insufficient. The reason is that we are at the doorstep of creating True AI and among all intelligent programs we must choose the one we will be most comfortable with from now on. Thus, it is not a good idea to choose one of these successful programs. It would be better to choose a program that does not pursue victory at any cost. Such a program could be called a loser because it will not be successful enough. After all, both in humans and in AI relentless ambition is not a positive trait.
Category: Artificial Intelligence
[92] viXra:2604.0095 [pdf] submitted on 2026-04-24 19:59:00
Authors: Hongyuan Ye
Comments: 7 Pages. (Note by viXra Admin: Please cite and list scientific references)
[In the author's opinion,] Maxwell’s equations are an integration of divergence, curl, and classical electromagnetism. Based on [this] definition, divergence and curl can apply to static electromagnetic fields, [but] not to dynamic, time-varying ones[?]
Category: Mathematical Physics
[91] viXra:2604.0094 [pdf] submitted on 2026-04-24 19:37:18
Authors: Viktar Yatskevich
Comments: 20 Pages. figures 10, references 30
The Big Bang Theory (BBT) is currently widely accepted. The theory explains the expansion of galaxies by the action of "dark energy", a mysterious force that repels matter and causes space to expand at an accelerating rate. Within galaxies, the action of gravity prevails over the force of "dark energy" and no expansion occurs. This situation motivates the search for alternative physical interpretations based on verified observational data. A new physical interpretation of gravity is proposed to complement existing cosmological models. The theory is formulated both as a phenomenological framework and as a microscopic description explaining gravitational interaction at the atomic level. The proposed approach is constructed as a connected sequence of known physical phenomena, each of which has experimental confirmation. This provides a physically grounded basis for interpreting gravitational interaction across scales. A distinctive feature of the theory is its emphasis on the physical mechanisms underlying the formation and action of the gravitational field, while mathematical modeling is treated as a secondary descriptive tool. The theory offers a possible physical interpretation of the formation of cosmic structure driven by gravity alone. According to the proposed framework, gravitational interaction may manifest itself as either attraction or effective repulsion depending on the physical state of interacting matter. Within this interpretation, the observed expansion of the Universe may be described without invoking hypothetical forms of energy such as dark energy.
Category: Relativity and Cosmology
[90] viXra:2604.0093 [pdf] submitted on 2026-04-23 03:37:26
Authors: Richie DeMott
Comments: 8 Pages.
This manuscript offers a cross-scale framework for persistent formation in which dissipation-enabled capture, functional boundary, and coarse-grained inheritance jointly explain how stable higher-order units emerge from prior-scale dynamics under differing local mechanisms. The framework is interpretive and organizational, grounded in established physics—nonequilibrium thermodynamics, coarse-graining, and effective description—rather than proposing new fundamental laws. A central contribution is a functional boundary criterion combined with the concept of resolution-shifted identity. The paper argues that boundary appearance is systematically observer-relative due to timescale separation: systems may appear sharply localized or effectively diffuse depending on whether their characteristic recurrence dynamics are resolvable within the observer’s temporal window. This is formalized through a causality-normalized participation ratio, β* = L/(cτ), used to compare recurrence-governed participation across atomic, planetary, and galactic domains without invoking a universal scaling law. The aim of the manuscript is to clarify how persistent structure can be understood consistently across domains while remaining fully compatible with thermodynamics and effective-field-theoretic reasoning.
Category: Thermodynamics and Energy
[89] viXra:2604.0092 [pdf] submitted on 2026-04-24 00:26:38
Authors: Joe Y. Haskian
Comments: 33 Pages.
For many years, particle and nuclear physicists have been especially interested in the pion-nucleon interaction. The reason is twofold: knowledge of the pion-nucleon interaction forms the foundation of any dynamical theory of nuclei, and it provides clues to the understanding of the internal structure of hadrons.
Category: Nuclear and Atomic Physics
[88] viXra:2604.0091 [pdf] submitted on 2026-04-23 09:09:28
Authors: Payam Danesh, Raoul Bianchetti
Comments: 16 Pages.
This paper studies a linear kinetic equation on a periodic phase space with free transport in position and Ornstein-Uhlenbeck relaxation in velocity. The equation is formulated in the weighted Hilbert space associated with the Maxwellian equilibrium. In that setting, the paper establishes the dissipation identity, conservation of mass, semigroup well-posedness, microscopic coercivity in velocity, and exponential convergence to equilibrium on the zero-mass subspace. The spatially homogeneous problem is treated separately, where the entropy law and exponential entropy decay follow directly from the Gaussian logarithmic Sobolev inequality. A final numerical section presents a Fourier-Hermite discretization and illustrates the same relaxation mechanism at the level of decay curves, spectral localization, and density profiles.
Category: Mathematical Physics
[87] viXra:2604.0090 [pdf] submitted on 2026-04-24 01:00:35
Authors: Sambuddha Majumder, Jayanta Majumder
Comments: 11 Pages.
dstr is a compact language for describing finite dynamic systems in a form that is both easy to write and semantically explicit. Its primary interface is a small s-expression DSL in which one states variables, initial conditions, actions, invariants, and reachability goals without the verbosity that often discourages exploratory modeling. The resulting description is not intended merely to support a checker run. It is intended to generate an explicit state graph that can subsequently be queried, transformed, filtered, and published. The central claim of this paper is that a state-action language becomes substantially more valuable when its semantics are treated as a first-class graph artifact, suitable not only for validation but also for downstream graph analysis and presentation.
Category: Data Structures and Algorithms
[86] viXra:2604.0088 [pdf] submitted on 2026-04-22 06:15:13
Authors: Tsutomu Hori, Manami Hori
Comments: 23 Pages, 13 Figures, 68 Equations, 10 References.
This paper conducts wave-making simulations for hydrofoils running at high speeds. The analysis is performed by constructing the wave-making Green's function due to a two-dimensional vortex filament by means of the Fourier transform method. We adopt the developed Green's function as the kernel function and approach the problem using the boundary element method. The Green's function is numerically computed by switching among three expansion forms, namely Taylor expansion, continued-fraction expansion, and asymptotic expansion, depending on the case.
Simulation calculations are performed for the lift force and wave-making resistance acting on the hydrofoil, the generated wave profile, and the flow velocity vectors and pressure distribution around a hydrofoil with a NACA airfoil section. As a result, we obtained concrete findings on the dependence of the wave-making phenomena upon the wing shape, running speed, submerged depth, angle of attack, and other factors.
Category: Classical Physics
[85] viXra:2604.0087 [pdf] submitted on 2026-04-22 09:09:48
Authors: Moshe Segal
Comments: 14 Pages.
From the dawn of civilization, Humans have struggled to understand and explain their environment. That struggle produced various branches of investigation and knowledge, which Humans denote as Science. Because the Human mind, which created this Science, might itself be composed of the same components as the environment, Humans might be bound never to understand and explain this environment, also denoted as the Existence, completely, because one cannot explain an issue if that explanation relies on this same issue. Nevertheless, Humans were still able to build a magnificent structure of knowledge and explanations, denoted, as mentioned above, as Science, which is an ongoing endeavor that continuously expands, reveals, and explains more and more secrets embedded in the Existence. However, present-day Physics, the branch of Science that specifically focuses on understanding and explaining the Existence, is still composed of branches which are not fully compatible. One of these branches is Classic Physics, which focuses on understanding and explaining what is denoted as the Macroscopic Environment. Another is Quantum Physics, which focuses on understanding and explaining what is denoted as the Microscopic Environment. These two branches of Physics are not fully compatible, and Humans are still struggling to expand the knowledge that might bridge the gap between them. Moreover, even Classic Physics itself still embeds branches which are not fully compatible, for example Gravity and Electromagnetism, and more must be done to bridge the gap that exists between these two branches as well. In the process of building the above-mentioned magnificent structure of Science, Humans detected various components in the Existence, denoted these components as Entities, and used these Entities as the building blocks on which the Science of Physics was constructed. Two such Entities are Space and Time, which the Relativity Theories of Einstein interweaved into one Entity, denoted as Einstein's four-dimensional Interwoven Space-Time Entity. Classic Physics relates to the four-dimensional Interwoven Space-Time Entity as a real component that really exists in the environment, or the Existence, and states that the environment embeds just one single such Entity. However, this paper argues that Quantum Physics might imply that the above-mentioned four-dimensional Interwoven Space-Time Entity is not a component that really exists in the environment, or the Existence. Moreover, this paper also argues that, although present-day Classic Physics does state that the four-dimensional Interwoven Space-Time Entity is a real existing Entity, a contrary conclusion can also be derived from arguments based only on present-day Classic Physics: that the four-dimensional Interwoven Space-Time Entity might not be a component that really exists in the environment, or the Existence. Thus, if it can be concluded that arguments based only on Classic Physics imply that the four-dimensional Interwoven Space-Time Entity does not really exist in the Existence, a conclusion that might also be in line and compatible with what Quantum Physics implies, then, by abandoning the conclusion that the four-dimensional Interwoven Space-Time Entity really exists in the environment, and by validating the assumption that it might be only a facet or an attribute of another component of the Existence, which the Science of Physics denotes as Energy, this might also provide a lead for starting to bridge Classic Physics and Quantum Physics, which, as stated above, are still significantly incompatible.
Category: Relativity and Cosmology
[84] viXra:2604.0086 [pdf] submitted on 2026-04-22 20:37:16
Authors: Taiwei Song
Comments: 3 Pages.
Based on the Geometry of Spacetime Structures, this short paper briefly argues for the importance of the neutron in the creation of natural spacetime. It also uses only the most fundamental natural constants to simply and precisely derive and calculate the energy levels and lifetimes of the neutron and the deuteron ²H.
Category: High Energy Particle Physics
[83] viXra:2604.0085 [pdf] submitted on 2026-04-22 20:33:08
Authors: Timothy Jones
Comments: 2 Pages. (Note by viXra Admin: Please cite listed scientific references)
The point on the unit circle associated with the arc Pi/2 is (0,1). We prove the bidirectional correspondence: every ordered pair of positive real numbers (a,b) corresponds to a point on the circumference of a circle of radius sqrt(a^2+b^2) whose associated arc length is less than Pi/2. If Pi/2 is rational, this gives a contradiction.
Category: Number Theory
[82] viXra:2604.0084 [pdf] submitted on 2026-04-22 13:48:50
Authors: N. S. Alsaud
Comments: 7 Pages.
The theoretical framework of globoeconomics is introduced in this paper as a global generalization of economic analysis. This coherent treatment scientifically explains many perplexing economic, political, and societal phenomena, which necessitates rereading the historical development of humankind from a novel perspective. Since the approaches used in this treatment provide more plausible and simpler justifications, they are suitable for adoption on the basis of the philosophical reasoning of Occam's razor.
Category: Economics and Finance
[81] viXra:2604.0083 [pdf] submitted on 2026-04-22 20:28:08
Authors: Branko Zivlak
Comments: 3 Pages. 2 Tables (Note by viXra Admin: An abstract in the article is required)
Formulas are presented that connect known parameters in various fields of physics, thanks to the original concept and the Theory of Ruđer Bošković.
Category: Classical Physics
[80] viXra:2604.0082 [pdf] submitted on 2026-04-22 20:25:51
Authors: Victor Victorovich Oleksenko
Comments: 19 Pages. In Russian
This work presents a new physical paradigm based on the recognition of the substance "Nolekson" (Nl), an inert gas occupying the 0-th position in the 0-th period of D.I. Mendeleev's periodic table of chemical elements [1]. This is a quantum stationary cosmological model in which cosmic microwave background (CMB) photons and all particles are interpreted as vortex excitations (tori) in the Nolekson medium, an ultrarelativistic gas filling all space [2, 3]. The mass of any particle or photon has the dimension of area [4, 5, 6] (kg = m^2), which resolves dimensional paradoxes and allows the Hubble constant (H_0) to be expressed via Planck constants. Precise relations are obtained linking the CMB temperature (T_CMB), Lorentz factors (γ ∼ 9×10^14 and γ^1 ∼ 7×10^14), the mean free path of noleksinos (L_nl, which is close to the perihelion of Jupiter, accounting for the barycenter [3]), and the gravitational constant (G).
Category: Quantum Gravity and String Theory
[79] viXra:2604.0081 [pdf] submitted on 2026-04-22 20:22:16
Authors: Patrick Nod Glavin
Comments: 7 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)
We present a constructive geometric proof demonstrating that a chain of Vesica Piscis figures following a doubling rule (R, 2R, 4R, ..., 2^(n-1) R) cannot be enclosed by a Vesica of radius 2^n R that closes at the chain's origin. The linear extent of n doubling iterations sums to 2^n - 1, producing an irreducible remainder of exactly one base unit R. This remainder is invariant across all scales of the construction and cannot be eliminated by further iteration. We show that this non-closure property originates at the unit construction itself: the radius R, which generates the Vesica, cannot be consumed by the figure it produces. The result is established through direct lattice construction without appeal to limiting processes or infinite series. We discuss connections to self-similar fractal structures, aperiodic tilings, and renormalization-group theory. The construction arises from Geosectometry, a geometric framework developed by the author for investigating natural-state properties of constructive geometry.
Category: Topology
[78] viXra:2604.0080 [pdf] submitted on 2026-04-21 23:51:53
Authors: Viktor Strohm
Comments: 5 Pages. (Note by viXra Admin: Please cite scientific references of other authors)
A mechanical model of gravity is proposed within the framework of an energy medium, in which every body continuously absorbs energy, creating a sphere of reduced energy density around itself. The pressure gradient of this medium generates a force that exactly reproduces Newton's law for an appropriate choice of coefficient. The model naturally explains capture into elliptical orbits, perihelion precession (through violation of additivity of density in the region of overlapping spheres, "synergy"), and provides a physical interpretation of gravitational redshift/blueshift, cosmological redshift, the Hubble constant, and the negative result of the Michelson-Morley experiment. Analytical and numerical formulas for calculating trajectories, including precessing rosettes, have been obtained.
Category: Astrophysics
[77] viXra:2604.0079 [pdf] submitted on 2026-04-21 23:49:01
Authors: Payam Danesh
Comments: 19 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)
A Gamma-Bernoulli approach to the critical-line problem for the Riemann zeta function is developed. Starting from the Mellin-theta representation and the functional equation, one obtains explicit identities for the reflected Gamma quotient and for the regularization built into the Weierstrass product for Γ. On the Bernoulli side, the kernel (e^u − 1)^(−1) is decomposed into its singular part and an analytic remainder, which yields a concrete zero-conditioned identity after continuation. The analysis shows that the harmonic divergence visible in raw finite Gamma products is a truncation phenomenon and therefore cannot by itself force Re(ρ) = 1/2. What remains is a coercive estimate which, if established, would convert the same mechanism into a critical-line theorem.
Category: Number Theory
[76] viXra:2604.0078 [pdf] submitted on 2026-04-21 23:47:05
Authors: Jean-Yves Boulay
Comments: 12 Pages. (Note by viXra Admin: Please cite scientific references of other authors)
Grounded in a novel mathematical framework, this study demonstrates that any Euclidean triangle can be uniquely categorized into one of four canonical classes based on the intersection of isosceles and right-angled characteristics. We prove that these four classes form a complementary entanglement capable of saturating a rectangular space without voids. Furthermore, this geometric configuration is shown to be isomorphic with specific numerical assemblies in number theory, establishing a direct link between the fundamental sequence of whole numbers and the stability of geometric structures.
Category: Geometry
[75] viXra:2604.0077 [pdf] submitted on 2026-04-21 20:16:07
Authors: Wim Vegt
Comments: 40 Pages.
This paper introduces Localized Intrinsic Field Equilibrium (LIFE), a unified field mechanical framework which posits that the photon is not merely a propagating wave, but a discrete electromagnetic wave packet maintained in dynamic equilibrium. Within this framework, field confinement is anisotropic: the electromagnetic forces maintaining equilibrium differ in the transverse and longitudinal planes. Consequently, the effective electromagnetic mass of the photon behaves as a vector quantity, dependent on the direction of propagation and external field interaction. Leveraging this framework, we propose a novel method for achieving ultra-high resolution photolithography by applying Electric Dipole Spin Resonance (EDSR) to bulk optical materials. While EDSR is traditionally utilized in quantum computing for single-electron spin manipulation, we demonstrate its application in a macroscopic "bulk" capacity to induce resonant light-matter coupling within a Sodium Chloride (NaCl) crystal lens at cryogenic temperatures. By driving the crystal lattice into a strong electromagnetic resonance, we alter the dispersion relation of the medium, creating a "Slow Light" regime where the propagation speed of light is reduced by a factor of 10 (v ≈ c/10). This massive deceleration results in a surge of the effective refractive index (n ≈ 10), which compresses the wavelength of standard red laser source light (650 nm) to an effective wavelength of 65 nm inside the lens. This hyper-refractive state allows the integrated lens to function as a solid immersion system with significantly enhanced optical power, projecting a demagnified image onto a silicon wafer 10 times smaller than the diffraction limit would normally permit. This approach offers a pathway to advanced integrated circuit scaling by achieving Extreme Ultraviolet (EUV)-class resolution using standard optical frequencies, thereby bypassing the complexity and energy costs associated with high-energy photon sources.
Category: Quantum Gravity and String Theory
[74] viXra:2604.0076 [pdf] submitted on 2026-04-21 23:40:45
Authors: Wim Vegt
Comments: 47 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)
Achieving stable magnetic confinement in nuclear fusion plasmas remains a profound theoretical and engineering challenge. Fundamentally, plasma confinement is a problem of macroscopic force density equilibrium. However, standard Magnetohydrodynamic (MHD) models treat mechanical fluid dynamics and electrodynamics as coupled but distinct systems, often relying on fragmented approximations that struggle to predict complex plasma instabilities. This paper introduces a unified theoretical framework that seamlessly integrates the mechanical fluid dynamics of the Navier-Stokes equations (governing mechanical pressure and fluid velocity) with the Local Interaction Field Equilibrium (LIFE) theory. By strictly expressing all physical interactions (including mechanical pressure, radiation pressure, electromagnetic field tensors, inertia, and gravitational coupling) in identical dimensions of force density (N/m³), we derive a single, continuous equilibrium field equation. This exact dimensional consistency eliminates the mathematical boundaries between the material plasma (Deuterium infusion) and the energetic confinement fields (microwave heating and magnetic containment). The resulting unified N/m³ equation provides a novel, rigorous analytical foundation for understanding high-energy plasma dynamics, offering new predictive pathways for mitigating instabilities in Tokamak and stellarator confinement systems.
Category: High Energy Particle Physics
[73] viXra:2604.0075 [pdf] submitted on 2026-04-20 22:17:01
Authors: Jose Risomar Sousa
Comments: 4 Pages.
I present a method to solve the general cubic polynomial equation based on six years of research that started when, in the fifth grade, I first learned of Bhaskara's formula for the quadratic equation. I was fascinated by Bhaskara's formula and naively thought I could replicate his method for the third-degree equation, but only succeeded after countless failed attempts. The solution involves a simple transformation to form a cube which, by chance, happens to reduce the degree of the equation from three to two (which seems to be the case for all polynomial equations that admit solutions by means of radicals).
Category: Algebra
[72] viXra:2604.0074 [pdf] replaced on 2026-04-30 11:54:02
Authors: Carl Littmann
Comments: 5 Pages.
Einstein said, "I want to know how God created this world. I want to know His thoughts..." [1] So, I think Einstein sought answers to such basic questions as: "Why don't we have a universe of solely empty space, i.e., all nothingness, total 'void', with no matter or energy in it, instead of our universe with some mass?" Finding good answers may seem impossible, but let's creatively try to. Regarding the above question, for example, we note that the opposite of a 'totally void universe' is a 'fully-filled universe'. But the 'fully-filled' option drags along with it a huge load of other questions, possibilities, and problems to be worked out. Thus, perhaps those many other questions and issues, dragged along with it, are the main 'reason' why we have a universe with 'gross' mass occupying only a very low fraction of all space (i.e., only 10 to the power of -19 or less). But still, some fraction of space, indeed! So, below we formally present our analytic method, involving comparing different possibilities and their merits, and give examples of it. And we conclude, like Einstein, that "God did not 'play dice', so-to-speak, when 'choosing our universe'."
Category: Quantum Physics
[71] viXra:2604.0073 [pdf] submitted on 2026-04-20 22:19:51
Authors: Floriano R. Pohlmann
Comments: 8 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)
We propose an experiment to test whether the one-way speed of light is isotropic, without relying on clock synchronisation between separated stations. Two identical stations separated by a fixed distance exchange laser beams through complementary shutters driven by independent atomic clocks. The phase relationship between the two stations' light maxima directly encodes any asymmetry in one-way light speed. No synchronisation signal is required and the measurement is not circular. Relative clock drift between stations is monitored by observing whether the inter-station timestamp offset remains constant over the measurement interval. This offset is not interpreted as a one-way propagation time and does not constitute a synchronisation convention -- it serves solely as a stability diagnostic. The known frequency precision of atomic clocks, combined with regular monitoring of this offset, provides a sufficient guarantee of system integrity. The experiment is sensitive to any preferred-frame velocity component along the baseline. The true velocity of the apparatus through such a frame -- if one exists -- is unknown. Earth's orbital velocity of approximately 30 km/s is a lower bound only; contributions from the motion of the solar system and galaxy relative to other astronomical reference points could raise the true figure substantially. A null result would constrain preferred-frame theories. A non-null result would warrant careful independent replication.
Category: Relativity and Cosmology
[70] viXra:2604.0072 [pdf] submitted on 2026-04-19 20:12:15
Authors: Sahebabegum Pathan, Naveen K. Singh
Comments: 15 Pages.
In this work, we present Python code to study the anharmonic oscillator. To obtain numerical solutions for the eigenvalues and eigenfunctions, we discretize the spatial coordinate into a finite number of grid points and use the finite difference method. We observe that the energy-level shift produced by the anharmonic term in the Hamiltonian increases towards higher energy states. We compare the eigenfunctions of the ground, first excited, and second excited states of the harmonic oscillator with those of the anharmonic oscillator. Further, the wave-functions corrected using first-order perturbation theory are compared with the wave-functions obtained from the numerical solution of Schrödinger's equation. The numerical Python code shows the consistency of first-order perturbation theory for small values of the anharmonic term.
Category: Quantum Physics
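The finite-difference scheme described in the abstract above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code; the grid size, box length, and the anharmonic coefficient lam = 0.1 are assumptions chosen for the sketch:

```python
import numpy as np

def eigenvalues(lam, n=1000, box=10.0, k=3):
    """Lowest k eigenvalues of H = -(1/2) d^2/dx^2 + x^2/2 + lam*x^4
    (hbar = m = omega = 1), using second-order central differences on
    a uniform grid with hard walls at +/- box."""
    x = np.linspace(-box, box, n)
    dx = x[1] - x[0]
    diag = 1.0 / dx**2 + 0.5 * x**2 + lam * x**4   # kinetic diagonal + potential
    off = -0.5 / dx**2 * np.ones(n - 1)            # hopping term of the Laplacian
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigh(H)[0][:k]                # eigh returns ascending eigenvalues

E_harm = eigenvalues(0.0)     # harmonic limit: expect ~ 0.5, 1.5, 2.5
E_anh = eigenvalues(0.1)      # assumed anharmonic strength lam = 0.1
shifts = E_anh - E_harm       # per the abstract, the shift grows with the level
```

The growth of `shifts` with level index matches first-order perturbation theory, where the x^4 shift scales like 2n^2 + 2n + 1.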
[69] viXra:2604.0071 [pdf] submitted on 2026-04-19 18:58:09
Authors: Jose Risomar Sousa
Comments: 12 Pages.
A generalization of the Riemann functional equation with a broader validity domain than the one available in the literature is introduced. The insight that led to this new relation came from a new formula for the zeta function created herein that implies the Riemann functional equation. A few minor developments that stem from new formulae introduced previously are also discussed.
Category: Number Theory
[68] viXra:2604.0070 [pdf] submitted on 2026-04-18 23:43:48
Authors: Xianxiang Duan
Comments: 3 Pages.
Following the analogy with Feynman's electromagnetic retarded potential formula, it is necessary to derive the energy conservation equation, which requires the concept of "repulsion." The four equilibrium forces governing Mercury's orbit consist of gravity, the mass-field force, the centripetal force, and another force; integrating this formula yields the energy conservation equation.
Category: Relativity and Cosmology
[67] viXra:2604.0069 [pdf] submitted on 2026-04-18 08:01:34
Authors: Jorma Jormakka
Comments: 8 Pages.
In relativity theory the advance of an atomic clock in a GPS satellite, some 38 microseconds in 24 hours, is explained by time dilation due to velocity from the Special Relativity Theory and time dilation in a gravitational field from the General Relativity Theory. The article shows that the time advance of a GPS satellite can be explained without any relativity theory, and the formulas are to first order the same as in the relativity theory. It is also shown that the calculation of the GPS clock advance from the relativity theory is incorrectly made: in relativity theory there is the equivalence principle, and because of it the time dilation caused by acceleration due to the curved orbit must be included. When it is included, the clock advance is incorrect. It is also shown that Einstein's 1907 calculation of the gravitational redshift is incorrectly made, though the result is correct.
Category: Relativity and Cosmology
[66] viXra:2604.0068 [pdf] submitted on 2026-04-18 23:35:24
Authors: Rusin Danilo Olegovich
Comments: 5 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)
We introduce and study the entire function $\lambda(s) = \sum_{n=1}^{\infty} n^{s+1/2}/n^n$, defined by a Dirichlet-type series with super-exponential coefficients. We prove that $\lambda(s)$ converges absolutely for all $s \in \mathbb{C}$, uniformly on compact sets, and is therefore an entire function of order zero. We establish a closed-form evaluation of the special value $\lambda(-1/2) = \int_0^1 x^{-x}\,dx$, connecting $\lambda$ to the classical Sophomore's Dream identity of Bernoulli. We further prove that $\lambda(s)$ is term-by-term differentiable, with $\lambda^{(k)}(s) = \sum_{n=1}^{\infty} (\ln n)^k\, n^{s+1/2}/n^n$ for all $k \geq 0$, justified by the Weierstrass $M$-test. Finally, we propose a conjecture connecting $\lambda(s)$ to the Riemann zeta function.
Category: Number Theory
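The special value quoted above is easy to check numerically: at s = -1/2 the series reduces to the sum of 1/n^n, which Bernoulli's Sophomore's Dream identifies with the integral of x^(-x) over (0,1]. A quick sketch, not from the paper (the term count and quadrature grid are arbitrary choices):

```python
import numpy as np

# lambda(-1/2) = sum_{n>=1} n^0 / n^n = sum 1/n^n; the terms decay
# super-exponentially, so a couple dozen suffice for double precision.
n = np.arange(1, 25, dtype=float)
series = np.sum(1.0 / n**n)

# Sophomore's Dream: the same value equals the integral of x^(-x) on (0,1].
# The integrand extends continuously to 1 at x = 0, so a midpoint rule works.
N = 100_000
x = (np.arange(N) + 0.5) / N
integral = np.mean(x**(-x))
```

Both computations agree to well beyond single precision, consistent with the closed-form evaluation claimed in the abstract.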
[65] viXra:2604.0067 [pdf] submitted on 2026-04-18 15:13:11
Authors: Matt Guiney
Comments: 10 Pages.
Based on numerous physical experiments that can easily be performed by anyone, matter of varying densities contained in a vessel is always arranged in a particular order when a force is applied to that vessel and the vessel is put into motion. The direction of the motion dictates the density-gradient orientation. Other experiments make it apparent that this arrangement of matter (based on density and dependent on the direction of motion) ceases when objects are in free fall. This would imply that when objects are in free fall there is no force present that is responsible for the apparent downward motion of objects; rather, the reference frame (earth) is moving upwards to meet the objects. This concept (the upwards motion of the earth) is explored, and calculations are performed to understand how we, standing on earth, can perceive a downward acceleration (g) of objects in free fall if the earth is in motion upwards. Other concepts regarding the motion of the earth are also discussed.
Category: Relativity and Cosmology
[64] viXra:2604.0066 [pdf] submitted on 2026-04-18 20:11:48
Authors: Clark M. Thomas
Comments: 7 Pages.
Telomeres in humans are caps at the two ends of chromosomes that largely determine our normal life expectancy. Chromosomal senescence typically approaches before our death, and no amount of botox can stop it. Interestingly, some people live longer than their seventies, with delayed senescence. Even though quality of life is more important than quantity, most of us want both quality and quantity, with minimal time spent in senescence. To what degree could this optimum be under our control? If we all could approach seeming immortality, what could that life extension mean for our individual and social lives?
Category: General Science and Philosophy
[63] viXra:2604.0065 [pdf] replaced on 2026-04-21 04:29:05
Authors: Kenneth C. Johnson
Comments: 14 Pages. Added link to MATLAB code implementation.
The Magnus-exponent method of solving non-autonomous (variable-coefficient) coupled linear differential equations is reviewed, and three quadrature approximation formulas are derived with residual errors proportional to the 3rd, 5th, and 7th power of the integration interval size.
Category: General Mathematics
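The lowest-order member of the quadrature family the abstract mentions (residual proportional to the 3rd power of the step) corresponds to approximating the first Magnus term by midpoint quadrature. A minimal Python sketch under that assumption, not the author's MATLAB implementation, for x'(t) = A(t) x with a 2x2 rotation-generator coefficient matrix whose exponential is exact:

```python
import numpy as np

def rot(theta):
    """exp(theta * J) for J = [[0, 1], [-1, 0]]: a plane rotation."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

def magnus2(omega, x0, T, steps):
    """Magnus integrator for x'(t) = A(t) x with A(t) = omega(t)*[[0,1],[-1,0]]:
    per step, truncate the Magnus series at Omega_1 = integral of A, approximate
    it by the midpoint rule, and exponentiate exactly as a rotation."""
    h = T / steps
    x = np.array(x0, dtype=float)
    for k in range(steps):
        x = rot(h * omega(k * h + h / 2)) @ x
    return x

# omega(t) = t: since the A(t) commute here, the exact solution is a rotation
# by integral_0^T t dt = T^2/2, and the Magnus map preserves the norm exactly.
x_final = magnus2(lambda t: t, [1.0, 0.0], 2.0, 200)
```

Because each step is an exact matrix exponential, the scheme inherits the structure preservation (here, unit norm) that motivates Magnus methods over naive Taylor integrators.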
[62] viXra:2604.0064 [pdf] submitted on 2026-04-17 20:33:12
Authors: Joao Carlos Holland De Barcellos
Comments: 16 Pages.
We will show that, within the "Decreasing Universe Theory" (DUT), time inside a gravitational field is altered in the same way as spatial distance measurement scales. At the cosmic level (timescales of millions of years), this effect becomes perceptible and explains the variation in the measured duration of Type Ia Supernova (SN1A) explosions.
Category: Relativity and Cosmology
[61] viXra:2604.0063 [pdf] submitted on 2026-04-17 17:15:56
Authors: Ryan Hackbarth
Comments: 4 Pages.
In this paper, I demonstrate how the Dirichlet Eta function may be represented as a sum of its fundamental frequencies through the use of a power series whose coefficients are the Euler Product representation of the Riemann Zeta function.
Category: Number Theory
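For context, the Dirichlet Eta function is the alternating-sign counterpart of zeta and satisfies eta(s) = (1 - 2^(1-s)) * zeta(s). A quick numerical check of that standard relation at s = 2 (an illustration of the function itself, not of the paper's power-series construction; the truncation length is arbitrary):

```python
import numpy as np

# eta(s) = sum_{n>=1} (-1)^(n-1) / n^s; at s = 2:
# eta(2) = (1 - 2^(1-2)) * zeta(2) = (1/2) * pi^2/6 = pi^2/12.
N = 1_000_000
k = np.arange(1, N + 1)
signs = np.where(k % 2 == 1, 1.0, -1.0)        # +1 for odd n, -1 for even n
eta2 = np.sum(signs / k.astype(float) ** 2)
closed_form = np.pi**2 / 12
```

For an alternating series the truncation error is bounded by the first omitted term, about 1/N^2 here, so a million terms give roughly twelve correct digits.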
[60] viXra:2604.0062 [pdf] submitted on 2026-04-16 18:44:39
Authors: Ming Yang, Zhiwei Zhang, Jiahang Li, Haoseng Liu, Yuzheng Cai, Weiguo Zheng
Comments: 37 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)
Presentations are a primary medium for scholarly communication, yet most AI slide generators optimize the artifact (a visually plausible deck) while under-optimizing the delivery process (pacing, narrative, and presentation preparation). We present DeepSlide, a human-in-the-loop multi-agent system that supports preparing the full presentation process, from requirement elicitation and time-budgeted narrative planning, to evidence-grounded slide-script generation, attention augmentation, and rehearsal support. DeepSlide integrates (i) a controllable logical-chain planner with per-node time budgets, (ii) a lightweight content-tree retriever for grounding, (iii) Markov-style sequential rendering with style inheritance, and (iv) sandboxed execution with minimal repair to ensure renderability. We further introduce a dual-scoreboard benchmark that cleanly separates static artifact quality from dynamic delivery excellence. Across 20 domains and diverse audience profiles, DeepSlide matches strong baselines on artifact quality while consistently achieving larger gains on delivery metrics, improving narrative flow, pacing precision, and slide-script synergy with clearer attention guidance.
Category: Artificial Intelligence
[59] viXra:2604.0061 [pdf] submitted on 2026-04-17 00:26:11
Authors: Xiaofeng Hu
Comments: 6 Pages. (Note by viXra Admin: Please cite and list scientific reference and submit article written with AI assistance to ai.viXra.org)
The Collatz conjecture states that for any given positive integer N, if N is even, divide it by 2; if N is odd, multiply it by 3 and add 1. Repeating this process, N will eventually become 1. This paper proves that any positive odd integer O other than 1 cannot return to itself no matter how many times the iteration is performed. We derive the general formula satisfying this condition and rigorously prove by mathematical induction that this formula equals 1 uniquely in the set of positive odd integers. We thus conclude that there are no non-trivial periodic orbits in the Collatz mapping.
Category: Number Theory
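The iteration described in the entry above is straightforward to express in code. The sketch below (function names are ours, not the author's) applies the map and checks, for small starting values, that no odd value above 1 recurs within an orbit, which is the no-return property the paper claims:

```python
def collatz_step(n: int) -> int:
    """One application of the Collatz map: n/2 if even, 3n+1 if odd."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def collatz_orbit(n: int, max_iter: int = 10_000) -> list[int]:
    """Iterate the map from n until 1 is reached (or max_iter steps pass)."""
    orbit = [n]
    while n != 1 and len(orbit) < max_iter:
        n = collatz_step(n)
        orbit.append(n)
    return orbit

# If any odd value > 1 repeated within an orbit, the orbit would cycle and
# never reach 1; since these orbits do reach 1, all visited values are distinct.
for start in range(2, 200):
    orbit = collatz_orbit(start)
    odds = [x for x in orbit if x % 2 == 1 and x > 1]
    assert len(odds) == len(set(odds))
```

This is only an empirical illustration of the claim for small inputs, not a substitute for the paper's inductive argument.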
[58] viXra:2604.0060 [pdf] submitted on 2026-04-15 20:13:47
Authors: Paul Robert Mesler
Comments: 16 Pages. (Note by viXra Admin: Please cite and list scientific reference and submit article written with AI assistance to ai.viXra.org)
Euler’s first law states that only external impulses can increase the center-of-mass momentum of a multi-body system. However, after 30 test trials the mean value of known external friction impulses acting on a three-body system accounted for only ~8.2 per cent (standard deviation 0.037 and standard error 0.0068) of the increase in the final momentum of the system, leaving a ~91.8 per cent discrepancy. The three-body system consisted of two spheres, constrained to roll around quarter-circle barriers attached to a third body. As the spheres rounded the curves, centripetal contact forces acted on the spheres while equal and opposite centrifugal reactive contact forces acted on the inner walls of the curved barriers. These centrifugal reactive contact forces caused the system to accelerate, inducing an unknown external impulse that increased the orbital angular speed of the spheres with respect to our laboratory inertial frame. We derive a general impulse equation that demonstrates how the impulse history of centrifugal reactive contact forces couples to the increase in angular speed of the spheres due to this unknown external impulse, and how this couples to the increase in momentum of the system. We suggest that more rigorous experiments in friction-free environments be conducted to see if the increase in momentum persists after all external friction impulses are removed, which may shed light on the nature of the unknown external impulse.
Category: Classical Physics
[57] viXra:2604.0059 [pdf] replaced on 2026-04-26 07:19:04
Authors: Fei Ding, Yongkang Zhang, Yeling Peng, Youwei Wang, Guoxiong Zhou, Zijian Zeng
Comments: 8 Pages.
Reinforcement learning for multi-step reasoning with large language models (LLMs) often relies on sparse terminal rewards, leading to poor credit assignment conditions where the final feedback is evenly propagated across all intermediate decisions. This results in high gradient variance, unstable training, and numerous ineffective updates, ultimately causing the model to fail and preventing sustained improvement. We introduce a counterfactual comparison-based credit assignment framework, which samples multiple reasoning trajectories under the same input. By treating their differences as an implicit approximation of alternative decisions, we construct an implicit process-level advantage estimator that transforms sparse terminal rewards into step-sensitive learning signals. Based on this, we propose Implicit Behavior Policy Optimization (IBPO), which significantly improves training stability and performance upper bounds on mathematical and code reasoning benchmarks, pointing to a promising direction for unlocking the performance potential of LLMs.
Category: Artificial Intelligence
[56] viXra:2604.0058 [pdf] submitted on 2026-04-15 20:10:37
Authors: Fei Ding, Yongkang Zhang, Youwei Wang, Zijian Zeng
Comments: 8 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)
Under sparse terminal rewards, intra-group comparisons have become the dominant paradigm for fine-tuning reasoning models via reinforcement learning. However, long-term training often leads to issues like ineffective update accumulation (learning tax), solution probability drift, and entropy collapse. This paper presents a necessary condition for algorithm design from a token-level credit assignment perspective: to prevent reward-irrelevant drift, intra-group objectives must maintain gradient exchangeability across token updates, enabling gradient cancellation on weak-credit/high-frequency tokens. We show that two common mechanisms disrupting exchangeability make "non-cancellation" a structural norm. Based on this, we propose minimal intra-group transformations to restore or approximate the cancellation structure in the shared token space. Experimental results demonstrate that these transformations stabilize training, improve sample efficiency, and enhance final performance, validating the value of this design condition.
Category: Artificial Intelligence
[55] viXra:2604.0057 [pdf] submitted on 2026-04-15 20:07:22
Authors: Jay Kumar
Comments: 7 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)
Modern data processing increasingly relies on workflows centered on filtering, transforming, aggregating, and routing structured data, yet mainstream programming paradigms do not always treat explicit data flow and composable transformation as primary computational concerns. This paper synthesizes three related lines of inquiry: the conceptual formulation of Flow-Oriented Programming (FLOP), the design and implementation of XF as an experimental realization of that model, and the empirical evaluation of XF across representative data-oriented workloads. Through this synthesis, the paper examines the historical development of programming languages and paradigms through the lens of composability, abstraction, and data transformation, and positions flow-oriented programming as a data-first model centered on explicit transformation pipelines. XF is then considered as a practical implementation of this model, emphasizing composable operations, decentralized control flow, and explicit value-state semantics. Benchmark results across CSV aggregation, transformation pipelines, pattern processing, and concurrency scaling suggest that while XF does not yet match the raw performance of mature language ecosystems, it demonstrates that flow-oriented programming is both expressively coherent and practically implementable as a distinct approach to data-intensive computation.
Category: Data Structures and Algorithms
[54] viXra:2604.0056 [pdf] submitted on 2026-04-15 08:10:21
Authors: Xianming Meng
Comments: 13 Pages.
The results of numerous Bell tests are viewed as evidence of quantum entanglement: an instant inner connection between totally separated entangled particles, which explains the ‘spooky action at a distance’, a term coined by Einstein. The paper proposes a variation of the Bell test, aiming to prove or disprove spooky action at a distance. The main variation is an added pair of half-wave plates, with which we can change the relative polarization angle of the entangled photon pairs and assess its impact. The paper derives the quantum prediction for Bell tests under changes in both the relative polarization angle of the entangled photon pairs and the relative angle of the polarization measurement devices. If the quantum prediction is proven correct by experiments, the claim of an instant inner connection between entangled particles, or spooky action at a distance, must be false.
Category: Quantum Physics
[53] viXra:2604.0055 [pdf] submitted on 2026-04-15 08:14:33
Authors: Xianming Meng
Comments: 11 Pages.
The concepts of quantum superposition and entanglement are at the heart of quantum mechanics, but they often cause confusion. This paper shows that the concepts are intimately related to quantum coherence. Through examining the traditional arguments, the paper reveals that the essence of quantum superposition is a statistical superposition of coherent states, while quantum entanglement results from the coherence of entangled particles as well as the common rule governing the measurements.
Category: Quantum Physics
[52] viXra:2604.0054 [pdf] replaced on 2026-04-20 17:45:12
Authors: Vladislav Smolenskij
Comments: 3 Pages. Added References section as requested by viXra Admin
The flying arrow paradox states that motion is impossible, as the arrow is motionless at every instant of time. Although the paradox has a mathematical solution, a physical uncertainty remains: is time continuous, or does it consist of numerous infinitesimally small intervals? Within the framework of classical mechanics, it is shown that the assumption of instantaneous rest (zero velocity at any moment) inevitably violates the laws of conservation of momentum and kinetic energy. A flying arrow cannot stop or resume its motion without external input. Consequently, the arrow cannot be "at rest" within an infinitesimally small interval of time, and/or time does not contain any "zeros" and flows continuously.
Category: Classical Physics
[51] viXra:2604.0053 [pdf] submitted on 2026-04-15 15:54:45
Authors: Martin Kraus
Comments: 8 Pages.
While the Schrödinger equation is one of the most important equations in physics, its interpretation is still being debated even one hundred years after its original publication. After summarizing Schrödinger's vibrational interpretations, this work remarks on three specific features that might be of particular interest for the development of future vibrational interpretations.
Category: History and Philosophy of Physics
[50] viXra:2604.0052 [pdf] submitted on 2026-04-15 19:40:31
Authors: S. Mayank
Comments: 3 Pages. (Note by viXra Admin: Please cite and list scientific references)
This paper presents a novel iterative representation of Fermat numbers, defined by the sequence Fn = 2^(2^n) + 1. By leveraging the fundamental recurrence relation F(n+1) - 2 = Fn(Fn - 2), we define a functional equation x = A/x - 2, where A = F(n+1) - 2. We demonstrate that this equation yields two integer solutions, x1 = Fn - 2 and x2 = -Fn. Through an analysis of the derivative of the map f(x) = A/x - 2, we prove that x = -Fn is an attractive fixed point and x = Fn - 2 is a repulsive fixed point, leading to a unique, convergent infinite continued fraction for the negative of any Fermat number. This provides a bridge between the rapid growth of Fermat sequences and the stability of iterative rational functions.
Category: Number Theory
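The recurrence and fixed-point behaviour quoted in the entry above can be checked numerically. A minimal sketch (variable names ours), using n = 2, F_2 = 17, and A = F_3 - 2 = 255:

```python
def fermat(n: int) -> int:
    """F_n = 2^(2^n) + 1."""
    return 2 ** (2 ** n) + 1

# Recurrence quoted in the abstract: F_{n+1} - 2 = F_n (F_n - 2)
for n in range(5):
    assert fermat(n + 1) - 2 == fermat(n) * (fermat(n) - 2)

# Fixed points of f(x) = A/x - 2 with A = F_{n+1} - 2 are x = F_n - 2 and
# x = -F_n.  Since |f'(-F_n)| = A / F_n^2 = (F_n - 2)/F_n < 1, the point
# -F_n is attractive, so iteration converges to it:
n = 2
A = fermat(n + 1) - 2          # 255
x = -20.0                      # any start near the negative fixed point
for _ in range(500):
    x = A / x - 2
assert abs(x - (-fermat(n))) < 1e-9   # converges to -F_2 = -17
```

The iteration oscillates around -17 with contraction factor (F_n - 2)/F_n = 15/17, matching the attractive/repulsive classification stated in the abstract.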
[49] viXra:2604.0051 [pdf] submitted on 2026-04-14 20:52:20
Authors: Leonardo Rubino
Comments: 8 Pages. In English. (Note by viXra Admin: Please cite and list scientific references)
The twin paradox is explained by means of acceleration.
Category: Relativity and Cosmology
[48] viXra:2604.0050 [pdf] submitted on 2026-04-14 16:14:36
Authors: Kiyoung Kim
Comments: 7 Pages.
From the perspective of objective natural philosophy, cosmology should adopt a broader view consistent with nature's fundamental properties, such as continuity and cyclicity. It should move beyond the claim that the entire universe originated from a single event—an explosion of pure energy—that birthed time and space solely within the framework of general relativity. Such a shift suggests that our universe may be one of many, each following its own life cycle. We first review dark energy—long a subject of debate regarding its true nature and role in the universe's accelerating expansion—and explain how this expansion is possible within a 4-D complex space model. Additionally, we review the second law of thermodynamics to explain how the entropy of a cyclic universe evolves.
Category: History and Philosophy of Physics
[47] viXra:2604.0049 [pdf] submitted on 2026-04-14 20:24:27
Authors: Christoper Mututu
Comments: 13 Pages. (Note by viXra Admin: For the last time, please cite and list scientific references)
We introduce a deterministic construction for generating composite number pairs (A,B) from strictly isolated prime sextets, configurations of exactly six primes situated at fixed offsets {0,8,14,18,24,32} from a base value a≡9 (mod 10) with no additional prime existing anywhere within the interval [a,a+32]. We term such configurations strictly isolated prime constellations of order six. The structural constraint a≡9 (mod 10) forces the six primes to terminate in the digit pattern 9,7,3,7,3,1 respectively, which is a consequence of the fixed offsets modulo 10. These six primes are arranged into a 2×4 rectangle whose columns are indexed by the digits {1,3,7,9}, the complete set of possible terminal digits of any prime greater than 5. Column-wise addition and subtraction yield a Sums row and a Difference row, from which the composites A and B are defined by their respective totals. We prove that this construction satisfies four universal invariants. First, the closed-form identities A=6a+96 and B=2a+52 hold for every valid cluster. Second, A is always divisible by 1, 2, 3, 5 and 6, while B is always divisible by 1, 2, 5 and 10, both following algebraically from a≡9 (mod 10). Third, the decimal expansion of A/B always carries a signature 2.9… and B/A always carries initial signature 0.3… for all a≥274, proven via closed-form analysis. Fourth, both A/B and B/A always produce non-terminating repeating decimal expansions, guaranteed by the arithmetic structures of the reduced denominators. These four invariants are established algebraically and confirmed computationally across 17,138 valid clusters up to 100 billion with zero failures on every claim.
Category: Number Theory
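The closed-form invariants in the entry above are purely algebraic consequences of a ≡ 9 (mod 10), so they can be checked independently of primality. A minimal sketch assuming, as the offsets suggest, that A = 6a + 96 is the total of the six cluster members (the offsets sum to 96), with B = 2a + 52 taken as quoted:

```python
# Offsets of the sextet members from the base value a (from the abstract).
OFFSETS = [0, 8, 14, 18, 24, 32]
assert sum(OFFSETS) == 96      # hence A = 6a + 96 is the total of the members

def invariants(a: int) -> tuple[int, int]:
    """Closed forms quoted in the abstract, valid for a ≡ 9 (mod 10)."""
    assert a % 10 == 9
    A = 6 * a + 96             # equals sum(a + d for d in OFFSETS)
    B = 2 * a + 52
    return A, B

for a in range(279, 10_000, 10):            # every a ≡ 9 (mod 10), a ≥ 274
    A, B = invariants(a)
    assert A == sum(a + d for d in OFFSETS)
    assert all(A % k == 0 for k in (2, 3, 5, 6))    # invariant 2 for A
    assert all(B % k == 0 for k in (2, 5, 10))      # invariant 2 for B
    assert 2.9 <= A / B < 3.0                       # decimal signature "2.9..."
    assert 0.3 <= B / A < 0.4                       # decimal signature "0.3..."
```

Since A/B = 3 - 60/(2a+52), the ratio climbs toward 3 from below and passes 2.9 exactly at a = 274, matching the stated threshold; the check says nothing about which a actually yield six isolated primes.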
[46] viXra:2604.0048 [pdf] submitted on 2026-04-14 19:25:48
Authors: Amal Ladjeroud
Comments: 10 Pages.
The Hilbert-Pólya conjecture is proved by constructing the Hilbert-Pólya operator, a self-adjoint operator whose eigenvalues are exactly the imaginary parts of the zeros of the Riemann zeta function on the critical line. Hence, the Riemann hypothesis is true.
Category: Number Theory
[45] viXra:2604.0047 [pdf] submitted on 2026-04-13 20:43:28
Authors: Pranshu Tripathi
Comments: 3 Pages. (Note by viXra Admin: Please cite and list scientific references)
The Collatz conjecture was introduced by Lothar Collatz in 1937. It is also known as the "3n + 1 problem". The conjecture states: start from any positive integer n. If n is even, divide by 2; if n is odd, multiply by 3 and add 1. The conjecture says that if you keep repeating the above steps, you will finally reach 1, no matter what value of n is chosen. In this paper, we prove the Collatz conjecture using the method of mathematical induction. We also use binary and ternary numbers to prove the Collatz conjecture.
Category: Number Theory
[44] viXra:2604.0046 [pdf] replaced on 2026-04-25 17:53:35
Authors: Michael A. Ivanov
Comments: 6 Pages.
Based on the low-energy quantum gravity model, it is shown that the overestimation of the Hubble constant at small z, characteristic of the cosmological model, can be eliminated using a two-parameter luminosity distance function that takes into account the change in the number of photons. This new function was fitted to a similar function in the model, which best describes observations. Estimates were obtained for the light attenuation parameter, which replaces the effect of dark energy, and for the Hubble constant in the new model without cosmological expansion. Such a resolution of the Hubble crisis could cause a serious conceptual crisis in modern cosmology.
Category: Quantum Gravity and String Theory
[43] viXra:2604.0045 [pdf] submitted on 2026-04-13 02:16:39
Authors: Hongyuan Ye
Comments: 15 Pages. (Note by ai.viXra.org Admin: Author name is required in the article after article title; and please cite and list scientific references)
Maxwell's equations theoretically introduce the hypothesis of "displacement current", stating that in a vacuum, a changing electric field can induce a changing magnetic field. Based on this, Maxwell predicted the existence of "electromagnetic waves" in a vacuum. The latest research indicates that the theory of "electromagnetic waves" has never been experimentally verified. The experiment conducted by German physicist Hertz in 1887 did not prove the existence of "electromagnetic waves" but rather demonstrated that wireless communication could be achieved by independent "electric field waves". Further research reveals that the hypothesis of "displacement current" is theoretically inconsistent. The simplest and most convincing way to verify the correctness of Maxwell's "electromagnetic wave" theory is to independently measure the electric field intensity and magnetic field intensity of electromagnetic radiation in wireless communication, and then determine through experiments whether the electric field energy density is equal to the magnetic field energy density. This verification experiment is based on the 3-meter method of EMC electromagnetic compatibility standards. A half-wave dipole dual-antenna configuration with reverse attenuation is used to cancel out the magnetic fields generated by the conduction currents of the two antennas, and the electric field intensity and magnetic induction intensity in the far-field of electromagnetic radiation are independently measured. The verification experiment shows that the electric field energy density is 137.6 times that of the magnetic field energy density, and the experimental result is 137.6 times the theoretical value of the "electromagnetic wave" theory. In this verification experiment, the relative error of the electric field intensity measurement is +/- 5%, and that of the magnetic field intensity measurement is +/- 40%. The experimental results are real and valid. 
This experiment fully proves that, whether in the near-field or far-field, the electromagnetic fields originate from the conduction current of the antenna. In a vacuum, without the participation of charges, a changing electric field cannot generate a changing magnetic field, and a changing magnetic field cannot generate a changing electric field. "Electromagnetic waves" do not exist in the objective physical world.
Category: Classical Physics
[42] viXra:2604.0044 [pdf] submitted on 2026-04-13 01:03:23
Authors: A. V. Kaminsky
Comments: Pages.
We propose an ontological interpretation of quantum mechanics based on the principle of subjective incompleteness—a fundamental limitation arising from the fact that the observer is part of the world being observed. By formalizing consciousness as a set of states Subj and the world as a self-mapping W:Subj→Subj, we construct an ontological configuration space whose structure naturally gives rise to the Hilbert space of quantum states. Within the proposed model, the hidden parameters in the representation of an observable operator are the eigenvalues of its canonical conjugate. In particular, the coordinate and momentum representations complement each other to form a complete ontological description, with the corresponding variables appearing as mutually hidden. From this perspective, the phenomenon of duality in physics, exemplified by pairs such as coordinate-momentum or time-energy, reflects the underlying subject-object structure of reality. The model offers a new justification of Bohr’s principle of complementarity and provides a geometrical account of noncommutativity in terms of subjective incompleteness. Furthermore, the entropic uncertainty relations of Hirschman-Everett are reinterpreted as quantitative measures of subjective incompleteness. This approach links the growth of thermodynamic entropy with the "motion" of the observer’s consciousness along the gradient of ontological states, thereby providing a natural explanation of the thermodynamic arrow of time. Thus, the key features of quantum mechanics emerge from the fundamental principle of subjective incompleteness. This article continues a series of works devoted to the role of the observer and consciousness in physics.
Category: Quantum Physics
[41] viXra:2604.0043 [pdf] submitted on 2026-04-11 21:55:05
Authors: Tanuj Kumar, Vandana [Doe]
Comments: 16 Pages.
We extend the bounded-vacuum framework introduced in Ref. [1] by incorporating the intrinsic dynamical properties of localized vacuum configurations and deriving a unified effective field equation for the vacuum potential Φ(x,t). In this approach, matter is identified as a localized vacuum loading corresponding to a deficit of the vacuum potential. We show that stable vacuum loading configurations have their own intrinsic degrees of freedom associated with vibrations of definite frequency ω_0; small deviations from the equilibrium satisfy the Klein—Gordon-type equation, so that the dispersion relation gives the known energy-momentum dependence E^2=p^2 c^2+m^2 c^4, where mass originates due to the condition ℏω_0=mc^2. The field equation under discussion includes not only the wave propagation, but the effect of gravity and the restoring force which represents a sort of the vacuum capacity limitation; in such a way we get the unified treatment of the problem of both massive and massless particles. Within the nonrelativistic limit, the theory turns out to be nothing else but the Schrödinger equation with the effective potential associated with fluctuations of the vacuum potential.
Category: Quantum Gravity and String Theory
[40] viXra:2604.0042 [pdf] submitted on 2026-04-11 21:48:39
Authors: Kenneth A. Watanabe
Comments: 19 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)
This paper presents a formal proof of Brocard’s Conjecture, which posits that there are at least four prime numbers between the squares of any two consecutive primes p_i^2 and p_{i+1}^2 for i > 1. By defining the function π*(n) that approximates the prime counting function π(n), we establish a lower bound for the number of primes in these intervals. Using mathematical induction, we demonstrate that the minimum number of primes in the interval, Δπ*(p_i), is consistently greater than or equal to 4 for all p_i ≥ 3. The proof is further supported by a rigorous error analysis, bounding the maximum possible deviation between the estimated prime count π*(n) and the actual prime counting function π(n).
Category: Number Theory
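Brocard's conjecture as stated in the entry above is easy to probe empirically (this is only an illustration for small primes, not the paper's inductive proof). A stdlib-only sketch checking the consecutive prime pairs below 200:

```python
def primes_up_to(n: int) -> list[int]:
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, v in enumerate(sieve) if v]

primes = primes_up_to(200 * 200)               # covers squares of primes < 200
plist = [p for p in primes if p < 200]

# Brocard: at least 4 primes strictly between p_i^2 and p_{i+1}^2 for i > 1,
# i.e. starting from the pair (p_2, p_3) = (3, 5).
for p, q in zip(plist[1:], plist[2:]):
    count = sum(1 for r in primes if p * p < r < q * q)
    assert count >= 4
```

For the first pair (3, 5) the interval (9, 25) already contains five primes (11, 13, 17, 19, 23), so the bound holds with room to spare.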
[39] viXra:2604.0041 [pdf] submitted on 2026-04-11 21:46:53
Authors: Ikechukwu Iloh Udema
Comments: 23 Pages. [New version of] ChemRxiv via 10.26434/chemrxiv-2023-0sjpl
There were conflicting definitions and misrepresentations of turnover frequency (TOF), catalytic cycle frequency (CCF), and catalytic first-order rate constant (k_cat) in the literature. Based on the Benfield and Lineweaver-Burk methods, data were generated. The results indicated that the CCFs for the forward (~0.0025-1.58 exp. (+17)/s) and reverse (~0.0003-5.4 exp. (+18)/s) directions showed an increasing trend with higher concentrations of the enzyme; this was applicable to the TOF. The number of fragments per molecule of an enzyme in the forward direction was 0.19-1.23 exp. (+5). In conclusion, TOF and CCF are different parameters; in particular, TOF is not the same as k_cat, which is a constant, whereas TOF and CCF vary. Strictly speaking, TOF was equal to f(M_3). Besides, CCF per molecule (> 1) of the enzyme is greater than TOF (< 1). This can be verified using sucrase in a future study; a larger data set is needed if starch is a substrate in order to reevaluate the models and address statistical concerns. PACS: 87.15.RJ; 87.14.ej. Keywords: Aspergillus oryzae alpha-amylase; Catalytic cycle frequency; First-order catalytic rate constant; Gelatinized insoluble potato starch; Turnover number; Turnover number frequency.
Category: Biochemistry
[38] viXra:2604.0040 [pdf] submitted on 2026-04-12 00:30:43
Authors: Daniel Henrique Pereira
Comments: 7 Pages.
A recent work explores the Riemannian geometry of Victoria-Nash Asymmetric Equilibrium (VNAE) manifolds. A fourth-order cancellation in the curvature tensor follows from Schwarz's theorem and Riemann antisymmetry. This resolves the higher-order complexity that hindered Einstein's non-symmetric field theory. The same mechanism suggests a conceptual parallel with the Navier-Stokes regularity problem, though no solution is claimed.
Category: Relativity and Cosmology
[37] viXra:2604.0039 [pdf] submitted on 2026-04-12 00:35:54
Authors: Daniel Henrique Pereira
Comments: 25 Pages.
We develop the complete Riemannian geometry of Victoria-Nash asymmetric equilibrium manifolds (VNAE) for $n$-player games. The metric $g_{ij} = \iota_i \iota_j \delta_{ij} + \varepsilon H_{ij}(V,\iota)$ yields an explicit Levi-Civita connection $\Gamma^k_{ij}$, a Riemann tensor $R^i_{\,jkl}$ with fourth-order $V$-derivative cancellation, a Ricci tensor $R_{ij} \approx \kappa\bigl(\iota_{i,j}\,\iota_i - \kappa\,\partial_i^2 \iota_i\bigr)\,\delta_{ij}$, and scalar curvature $K_s \approx \sum_{i<j} \iota_i \iota_j \det H_{ij}^s + O(\varepsilon^2)$. Positive/negative/zero signatures classify stability geometrically. The Lyapunov-Morse functional $\mathcal{L}$ satisfies $\frac{d^2}{dt^2}\mathcal{L}\big|_{\mathrm{VNAE}} \approx -2\operatorname{Ric}(\dot{s}^\perp,\dot{s}^\perp)$ along gradient flows, establishing Ricci curvature as the normal contraction rate. Classical Nash equilibrium, von Neumann’s minimax theorem, and Lyapunov stability emerge as degenerate flat limits as $\varepsilon\to0$.
Category: Geometry
[36] viXra:2604.0038 [pdf] submitted on 2026-04-11 01:43:47
Authors: Harish Chandra Rajpoot
Comments: 38 Pages. (Note by viXra Admin: The references are not listed in a standard/complete manner)
A generalized series-based formulation is developed to determine the hierarchical rank of any given linear permutation selected from the set of all possible linear permutations arranged according to a predefined order of priority of elements such as digits, letters, and other objects. The proposed model applies to permutations of words, numbers, and other discrete objects, enabling systematic identification of their positions in an ordered sequence. The formulation is expressed as a finite series in which each term corresponds to a specific element of the permutation. It applies to sets of distinguishable objects characterized by identifiable properties such as shape, size, colour, or surface pattern, assuming that all elements are equally likely to occupy any position in the arrangement without replacement. The model introduces three parameters, Formerity (F), Permuty (P), and Similarity (S), which collectively define the structure of the series. These parameters depend on the preceding elements, the permutations of successive elements, and the repetition characteristics within the arrangement. Notably, the number of terms in the series equals the number of elements in the permutation. This generalized formulation provides a structured and scalable approach for analyzing and ranking linear permutations in a wide range of combinatorial contexts.
Category: Algebra
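For comparison with the entry above, the standard lexicographic rank of a permutation of (possibly repeated) symbols can be computed directly; the sketch below is the textbook counting argument, not the author's Formerity/Permuty/Similarity series, and all names are ours:

```python
from math import factorial
from collections import Counter

def multiset_permutations(counts: Counter) -> int:
    """Number of distinct arrangements of the multiset described by counts."""
    total = factorial(sum(counts.values()))
    for c in counts.values():
        total //= factorial(c)
    return total

def rank(word: str, alphabet_order: str) -> int:
    """1-based lexicographic rank of word among the distinct permutations of
    its own letters, under the given priority order of symbols."""
    pos = {ch: i for i, ch in enumerate(alphabet_order)}
    counts = Counter(word)
    r = 1
    for ch in word:
        # Count arrangements that start with a strictly smaller unused symbol.
        for smaller in sorted(counts, key=pos.get):
            if pos[smaller] >= pos[ch]:
                break
            if counts[smaller] > 0:
                counts[smaller] -= 1
                r += multiset_permutations(counts)
                counts[smaller] += 1
        counts[ch] -= 1
    return r

# "cab" among the 6 permutations of {a, b, c} in alphabetical priority:
assert rank("cab", "abcdefghijklmnopqrstuvwxyz") == 5
```

As in the abstract, each position of the word contributes one term to the count, so the number of terms equals the number of elements in the permutation.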
[35] viXra:2604.0037 [pdf] submitted on 2026-04-11 01:35:55
Authors: Vladimir S. Netchitailo
Comments: 28 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)
The object formally designated C/2025 N1 (ATLAS) has been widely discussed as a candidate third interstellar object ("3I/ATLAS") due to its strongly hyperbolic trajectory. In standard celestial mechanics, an interstellar origin is inferred when the original barycentric eccentricity significantly exceeds unity prior to planetary perturbations. This interpretation, however, implicitly assumes that cometary dynamics are governed solely by gravitational forces and conventional outgassing. In this work, we propose an alternative hypothesis: C/2025 N1 (ATLAS) is not interstellar but a Solar System small body originating from the Oort Cloud, consistent with the framework of World-Universe Cosmology (WUC). We argue that its large excess velocity can be explained by a non-gravitational internal acceleration mechanism involving partial conversion of rotational energy of the nucleus into translational kinetic energy. Within WUC, the Universe is structured as a hierarchy of interaction regimes: Macro-world (gravity), Large-world (extremely-weak interaction), Small-world (super-weak interaction), and Micro-world (weak interaction). Previous studies associate Ball Lightning [1] with Solar System Small Body (SB1) and interpret the Tunguska superbolide [2] as an SB2 analogue. Extending this hierarchy, we identify C/2025 N1 (ATLAS) as an SB3 object. This model naturally accounts for its extreme hyperbolic excess velocity without invoking an interstellar origin and leads to specific, testable predictions regarding kinematics, activity, and radiation signatures. We compare these predictions with observations of ʻOumuamua, C/2019 Q4 (Borisov), and a growing population of low-albedo asteroids and "dark comets" exhibiting dust-poor outgassing.
Category: Relativity and Cosmology
[34] viXra:2604.0036 [pdf] submitted on 2026-04-11 01:31:54
Authors: Avyukt Jindal
Comments: 49 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)
We analyze superconducting critical temperatures, gap symmetries, and gap-ratio benchmarks within the Kinetic Synchronization Cooper (KSC) framework, presenting four advances over previous treatments. (i) Gap symmetry selected within Hmag: A dual Stoner criterion gives ηAFM = U NF Fnest, Fnest = ln(Wband/kBTc). For YBCO: ηAFM = 1.352 > 1 yields d-wave at Q = (π, π), with gap-ratio factor 4.28/3.528 obtained within the KSC Hmag gap equation and consistent with the canonical d-wave value. For flat-band moire graphene: ηFM = 6.94 ≫ 1 favors nodal pairing; related nodal evidence was reported for MATTG by Park et al. (2026), but this is not a direct MATBG confirmation. (ii) Corrected flat-band moire gap benchmark: fflat = 6.36 (not 9.1; ΩD = kBθD/tJ = 0.01438) and ρ = 0.034 (not 0.049; factor 0.695 removed). With cflat = 0.077 fixed against the current flat-band moire benchmark window spanned by Oh 2021 (MATBG) and Park 2026 (MATTG), KSC gives 5.456 at the class-benchmark evaluation point. We therefore treat the moire entry as a class-level benchmark rather than an independent MATBG-only validation. (iii) Two-layer pseudogap T*: Layer 1 (Tao-Bend fluctuation formula) reduces BaFe2As2 from 58% to 2% and YBCO from 77% to 5.6%. Layer 2 (SDW condensate softening EeffBend = EBend × fmag) further reduces BaFe2As2 to 1.3% and YBCO to 5.2% (using correct T*exp: 46 K and 130 K respectively). (iv) Per-atom ZPM hierarchy: Oxygen (16 amu) dominates YBCO (2.93× barium); boron (10.8 amu) dominates MgB2 (fatm = 1.103). Overall: mean Tc point error 2.0% (0.82σ), mean T* point error 5.9%; gap-ratio point deviations are reported as benchmarks rather than uncertainty bars. The compact formulas below are therefore best read as an effective closure, not as a complete microscopic derivation of every tabulated benchmark value.
Category: Condensed Matter
[33] viXra:2604.0035 [pdf] submitted on 2026-04-11 01:25:28
Authors: Idan Hackmon
Comments: 8 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)
We prove that no covering system with distinct odd moduli can have its least common multiple supported on at most four distinct odd primes (for arbitrary exponents). Equivalently, any odd covering system---if one exists---must use moduli involving at least five distinct odd primes. The proof introduces a weight function method. Moduli are partitioned into prime-power towers and composites; the towers define a "weight region" W in Z/LZ via CRT, and a union bound shows the composites cover at most an R-fraction of W with R = 41/45 < 1 for the worst-case prime set {3,5,7,11}. This leaves at least L/40 integers provably uncovered. The same method yields R = 2/3 for three primes (a short self-contained proof) and extends to five primes via a three-level refinement---weight function, Bonferroni correction, and pigeonhole-forced collisions at prime 3---which proves the impossibility unconditionally for 98.2% of exponent configurations. The remaining 1.8% reduce to a CRT coverage maximality conjecture (NC ≤ 0), for which we provide an analytical proof at k ≤ 3 primes and exhaustive computational verification over 9,000,000+ exact configurations at k = 4 with zero violations.
Category: Number Theory
[32] viXra:2604.0034 [pdf] submitted on 2026-04-10 16:24:18
Authors: Yu. E. Zevatskiy
Comments: 4 Pages.
The operating principle of an engine based on utilizing collision impulses of absolutely stationary particles with bodies of galaxies receding from the epicenter of the Big Bang in accordance with Hubble’s law is determined. A method for detecting radiation from absolutely stationary particles and an engine model converting the collision energy of absolutely stationary particles into mechanical work are proposed. This work may serve as a starting point for research that has never been conducted before.
Category: High Energy Particle Physics
[31] viXra:2604.0033 [pdf] submitted on 2026-04-10 17:33:11
Authors: Vladimir Trifonov, Philip V. Trifonov
Comments: 6 pages, 1 figure, no AI assistance
We propose a cosmological framework in which the observable matter content arises from a measure-theoretic structure rather than additional particle species. The construction distinguishes between a metric (Lebesgue-type) measure and an invariant structural (Haar-type) measure. Their interplay leads to a finite dark-matter distribution, a visible matter component localized near a crossover scale, and a predicted abundance ratio $\Omega_{\rm DM}/\Omega_b \approx 5.4$ without introducing new particles.
Category: Relativity and Cosmology
[30] viXra:2604.0032 [pdf] submitted on 2026-04-11 01:05:50
Authors: Andrey V. Voron
Comments: 2 Pages.
The article demonstrates the connection between the linear dimensions of the King’s Chamber in the Great Pyramid of Khufu and the dimensions of a unique right-angled triangle, in which the numerical values of the area, the perimeter, and the square of the shorter leg are equal. All dimensions are given in meters.
Category: Archaeology
[29] viXra:2604.0031 [pdf] submitted on 2026-04-11 01:03:35
Authors: Bin Wang
Comments: 5 Pages.
We show that on a complex projective manifold $X$, for $\mathbb{G}=\mathbb{R}$ or $\mathbb{Q}$, a class in $H^{p,p}(X;\mathbb{Z})\otimes\mathbb{G}$ is represented by a particular type of infinite series of subvarieties.
Category: Geometry
[28] viXra:2604.0030 [pdf] submitted on 2026-04-10 20:07:00
Authors: Ishir Rao
Comments: 8 Pages.
Distinguishing viral proteins from their human host counterparts is a fundamental challenge in computational virology, with direct implications for gene therapy vector design and antiviral therapeutics. We present a systematic comparison of three classification frameworks on 26,771 SwissProt-reviewed sequences (6,350 viral, 20,421 human): a TF-IDF k-mer Random Forest baseline, a logistic regression probe on frozen ESM-2 embeddings [1], and a supervised contrastive learning (SupCon [2]) projection head trained on those same embeddings. The k-mer baseline achieves 84% overall accuracy but fails on viral sequences (recall = 40%), while ESM-2 embeddings alone raise accuracy to 98% and viral recall to 96%, confirming that evolutionary pretraining encodes substantial host-viral discriminative signal without any task-specific supervision. Supervised contrastive fine-tuning further improves overall accuracy to 98.69% and viral F1 to 0.97, but the most consequential gains appear among proteins where biology itself is ambiguous: viral sequences that have evolved human-like surface features to evade immune detection show a disproportionate improvement under contrastive training, with mean classification accuracy on host-mimicry proteins rising from 55.5% (k-mers) to 69.4% (ESM-2) to 96.1% (ESM-2 + SupCon), a 26.7 percentage-point leap attributable directly to the contrastive objective. Manifold analysis via UMAP confirms that SupCon progressively restructures the embedding geometry over training, tightening intra-class cohesion and widening the inter-class margin in precisely the regions where host and viral proteomes overlap most.
Category: Quantitative Biology
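The TF-IDF k-mer baseline in the entry above starts from bag-of-k-mers counts over protein sequences. A minimal sketch of that featurization step (the toy sequence and k = 3 are illustrative; the TF-IDF weighting and Random Forest from the paper are not reproduced here):

```python
from collections import Counter

def kmer_counts(seq, k=3):
    """Bag-of-k-mers featurization of a protein sequence: count every
    overlapping window of length k. TF-IDF baselines weight counts
    like these by inverse document frequency across the corpus."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

c = kmer_counts("MKVLAAMKV", k=3)
print(c["MKV"])  # the 3-mer 'MKV' occurs twice in this toy sequence
```

A sequence of length n yields n − k + 1 overlapping k-mers, so the counts above sum to 7 for this 9-residue toy sequence.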
[27] viXra:2604.0029 [pdf] submitted on 2026-04-09 09:15:32
Authors: Salvatore Minutoli
Comments: 10 Pages.
We present a comprehensive numerical study of the radion stabilisation mechanism within a higher-dimensional emergent gravity framework. Starting from a ten-dimensional construction, we perform a dimensional reduction to an effective four-dimensional theory containing a radion field $\phi(x)$ and a set of internal excitation modes $\psi(x)$. The effective potential $V_{\text{eff}}(\phi,\psi)$ is derived, featuring competing power-law terms that naturally admit a stable minimum without fine-tuning. By solving the coupled equations of motion numerically, we demonstrate dynamical stability of the vacuum and extract the radion mass. A systematic parameter scan confirms the robustness of the stabilisation mechanism over a wide range of couplings. We confront our predictions with current LHC data from ATLAS and CMS, showing that the predicted radion mass $m_\phi \sim \mathcal{O}(\mathrm{TeV})$ lies in a region partially accessible to Run 2 searches, with further discovery potential at the High-Luminosity LHC. Our results provide strong support for the internal consistency of emergent gravity scenarios and offer a clear phenomenological target for future collider experiments.
Category: High Energy Particle Physics
[26] viXra:2604.0028 [pdf] replaced on 2026-04-13 07:42:00
Authors: Mikhail Batanov-Gaukhman
Comments: 50 Pages.
It is proposed to increase the number of positive and negative Λ-terms in Einstein's vacuum equation to infinity. The solution to this equation leads to a closed Universe filled with a virtually infinite number of "corpuscles" (i.e., stable convex spherical vacuum formations) and "anticorpuscles" (i.e., stable concave spherical vacuum formations) with a hierarchical discrete set of radii. These "corpuscles" and "anticorpuscles" of different scales are nested within one another like Russian dolls. Thus, they form a multitude of hierarchical chains, all beginning with the core of a single largest "corpuscle" (i.e., the mega-Universe) and culminating in a single core of the smallest "corpuscle" (i.e., the instanton). As a result, a closed Hierarchical Cosmological Model was obtained, which allows us to outline the ways of solving many problems of modern physics, such as: baryon asymmetry of the Universe, confinement of quarks, geometrization of electric charge, gravity, dark matter and energy, etc. This article is a development and refinement of the Geometrized Vacuum Physics Based on the Algebra of Signature (GVPh&AS), presented in the articles [3,4,5,6,7,8,9, 10,11,12,13,14,15,16].
Category: Relativity and Cosmology
[25] viXra:2604.0027 [pdf] submitted on 2026-04-09 11:52:26
Authors: L. Martino
Comments: 34 Pages.
In the last decades, energy-based models (EBMs) have become an important class of probabilistic models in which a component of the likelihood is intractable and therefore cannot be evaluated explicitly. Consequently, parameter estimation in EBMs is challenging for conventional inference methods. In this work, we provide a unified framework that connects noise contrastive estimation (NCE), reverse logistic regression (RLR), multiple importance sampling (MIS), and bridge sampling within the context of EBMs. We further show that these methods are equivalent under specific conditions. This unified perspective clarifies relationships among existing methods and enables the development of new estimators, with the potential to improve statistical and computational efficiency. Furthermore, this study helps elucidate the success of NCE in terms of its flexibility and robustness, while also identifying scenarios in which its performance can be further improved. Hence, rather than being a purely descriptive review, this work offers a unifying perspective and additional methodological contributions. The MATLAB code used in the numerical experiments is also made freely available to support the reproducibility of the results.
Category: Statistics
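The intractable component in an EBM is the normalizing constant, and the estimators the entry above unifies (NCE, MIS, bridge sampling) all reduce in the simplest case to reweighting samples from a tractable proposal. A toy sketch of the importance-sampling member of that family, with a standard-normal target whose true constant is √(2π) (the paper's own MATLAB experiments are not reproduced):

```python
import math
import random

random.seed(0)

def p_tilde(x):
    """Unnormalized target density exp(-x^2/2); true Z = sqrt(2*pi)."""
    return math.exp(-x * x / 2)

def q_pdf(x, s=2.0):
    """Normalized proposal density: N(0, s^2)."""
    return math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

# Z = E_q[ p_tilde(X) / q(X) ]: average the importance weights.
N = 200_000
Z_hat = sum(p_tilde(x) / q_pdf(x)
            for x in (random.gauss(0, 2.0) for _ in range(N))) / N
print(Z_hat, math.sqrt(2 * math.pi))  # estimate vs. true value ≈ 2.5066
```

The wider proposal keeps the weights bounded, which is the kind of robustness consideration the unified NCE/MIS/bridge-sampling view makes explicit.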
[24] viXra:2604.0026 [pdf] submitted on 2026-04-09 14:27:45
Authors: Adolfo Santa Fe Dueñas
Comments: 15 pages, 8 figures. Also available on arXiv:2604.06917.
The approximately flat outer parts of spiral galaxy rotation curves are commonly interpreted as evidence for a discrepancy between the observed baryonic mass and the dynamical mass inferred from the measured orbital velocities. In most standard analyses, this discrepancy is quantified using the spherical estimate M_dyn = v²R/G, which is exact only under spherical symmetry. However, spiral galaxies are flattened disk systems, for which mass exterior to the galactocentric radius under consideration can contribute non-negligibly to the gravitational field.
We introduce the Lost and Found (LF) model, a geometrically consistent Newtonian framework based on direct full-disk gravitational integration and a parametrized representation of the disk surface density. In this approach, the gravitational field is computed without imposing spherical symmetry, and the disk mass distribution is represented by two exponential components with a smooth outer truncation.
We apply the LF model to a heterogeneous sample of disk galaxies spanning a broad range of masses and radial extents. The model reproduces the main observed features of the rotation curves, including the inner rise and the approximately flat outer behavior, without explicitly invoking a dark matter halo or modifying Newtonian gravity. Across the sample, the LF-inferred mass scales nearly linearly with the conventional dynamical mass, with a characteristic reduction factor of approximately η_LF ≈ 0.67.
These results indicate that part of the inferred mass discrepancy may arise from the geometric treatment of gravitation in disk galaxies, and motivate a reassessment of mass inference in non-spherical systems.
Category: Astrophysics
[23] viXra:2604.0025 [pdf] submitted on 2026-04-08 21:01:43
Authors: Vladimir Trifonov, Philip V. Trifonov
Comments: 34 Pages. (Note by viXra Admin: For the last time, please submit article written with ANY AI assistance to ai.viXra.org)
We study some applications of hyperhamiltonian quantum mechanics (HHQM) to the problem of the origin of mass in quantum physics and cosmology. It is shown that, within the HHQM framework, all matter, ordinary and dark, is generated purely geometrically by the two-measure interaction on spacetime's locally compact Lie group.
Category: Quantum Physics
[22] viXra:2604.0024 [pdf] submitted on 2026-04-08 20:49:18
Authors: A. V. Herrebrugh
Comments: 12 Pages.
Absolutivity theory introduces a foundation of objective reality within a recently (March 2025) developed true 4D space-time model built on universal simultaneity within a continuum of expanding 3D space [08]. Essentially, the theory is a unification in the continuum of asymptotic modified Newtonian gravity [07] and quantum theory, both embedded in a true 4D orthogonal space-time framework [08]. The theory builds foundationally on Poisson's theory of potential fields, which leads to Newton's inverse square law; the core function M(r) requires a build-up of the gravitational field starting at M(0) = 0 and reaching a maximum value M(rc). The extended validity at the zero point of the spatial axes eliminates singularities in gravity and identifies the suitability of a deterministic quantum theory within the continuum, featuring intrinsic scalability and integrated dynamics in Absolutivity theory. The independent dimensional foundation of the theory contradicts intrinsic space-time deformation/curvature and mathematically paves the way for a fully deterministic description of quantum theory within the continuum of true 4D spacetime. This paper also introduces physical applications of Absolutivity theory: black holes, Hawking radiation, harmonic wave theory, and the mass-gravity property. Hawking radiation has been predicted from quantum mechanics alone and has never received support from a more classical perspective. Here, treatment in Absolutivity predicts the reality of a (black-hole-size-dependent) vacuum-state area hidden by the Schwarzschild radius rS in black holes, and supports radiation and escaping mass particles. It does not identify evidence for a complete evaporation of a black hole. In principle, the absence of the mass-gravity property excludes a photon from experiencing an attractive force in Newton's model.
This requires the 'lens' property of gravity for photons to be treated from a different perspective, using Lagrange's stationary-action principle. Strong curvature of 'bent' light attributed to a 'dark' matter source [NASA, 09] in (clustered) galaxies is treated in Sec. 2.1 with support for the phenomenon, however with a surprising and deviating conclusion of photons crossing the Schwarzschild radius twice. In harmonics, F(x, t) represents the transform of f(x, t), typically a transformed function representing intensity in (line-)space: in the true 4D topology f(x, t) emerges two-dimensional, while the four-dimensional f(x, y, z, t) mathematically emerges in the unified complex framework of 4D space-time...(Truncated by viXra Admin to fewer than 400 words)
Category: Quantum Physics
[21] viXra:2604.0023 [pdf] submitted on 2026-04-08 12:18:49
Authors: Hans Montanus
Comments: 8 Pages.
The successive action of the generators of the full modular group SL(2, Z) on the fundamental domain produces a tessellation of the upper half-plane H. Each tile is a curved triangle whose boundaries are circular arcs. We will analyze the curvatures of the boundaries from a flat-space point of view. All Euclidean curvatures are integers. We will show that these integer curvatures are either odd or multiples of 8, and that every odd number and every multiple of 8 occurs as a curvature.
Category: Number Theory
[20] viXra:2604.0022 [pdf] submitted on 2026-04-08 20:27:44
Authors: Carson Anderson
Comments: 9 Pages.
Admissibility is shown to be necessary for any well-defined comparison, defined by invariance under relabeling, refinement, composition, finite propagation, and closure. From these constraints, relational structure and compositional consistency are forced, uniquely determining the form of admissible scalar comparison. Every alternative introduces dependence on representation, decomposition, or descriptive scale and is therefore excluded. This establishes quadratic invariant structure as a necessary condition of physical description. Its appearance in both general relativity and quantum theory is not contingent, but structurally inevitable. Any theory admitting well-defined scalar comparison must realize this constraint.
Category: Mathematical Physics
[19] viXra:2604.0021 [pdf] submitted on 2026-04-06 20:55:08
Authors: Ahmed Hamid Mahmoud
Comments: 23 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org) https://zenodo.org/records/18748601
The quantum measurement problem (the absence of any dynamical mechanism connecting continuous wavefunction evolution to discrete empirical outcomes) has persisted since the foundations of quantum theory. Decoherence explains interference suppression but cannot explain outcome selection: the diagonal density matrix remains an improper mixture until one outcome is actualized. We propose that collapse is a physical phase transition in the coupled system-apparatus field. The order parameter ψ, constructed as a conditional collective coordinate of apparatus degrees of freedom, evolves under time-dependent Ginzburg-Landau (TDGL) dynamics with a symmetry-breaking potential. When the coherence pressure γ ≡ g√N|⟨Ŝ⟩|/ℏω_S exceeds a critical threshold γ_c = λ/ℏω_S, the symmetric phase (superposition) becomes unstable and the field crystallizes into one of the discrete stable minima (eigenstates). We derive the critical coupling from microscopic system-apparatus Hamiltonians, obtaining the scaling g_c ∼ N^(−1/2), which explains why macroscopic apparatuses collapse wavefunctions while microscopic interactions preserve coherence. The TDGL dynamics are derived from the microscopic Hamiltonian via the Schwinger-Keldysh path integral, with each step a controlled approximation requiring no modification to the Schrödinger equation. The Born rule P_n = |c_n|² is preserved through a dynamical selection mechanism: an equal-basin-volume theorem, proved from the permutation symmetry of the apparatus interaction, ensures that attractor basin geometry under probability-conserving Fokker-Planck flow converts quantum amplitudes into outcome probabilities. We characterize the quantum-classical interface via two complementary classicality criteria and identify a four-stage measurement chain (unitary evolution → decoherence → coarse-graining → phase transition) that resolves the Heisenberg cut dynamically.
The theory yields four falsifiable predictions absent from standard quantum mechanics: critical slowing near γ_c, hysteresis in the collapse-recoherence cycle, metastable supercooled superpositions, and transient Jacobian spikes at the moment of collapse. Quantitative estimates are provided for three experimental platforms (superconducting transmon readout, cavity QED with Rydberg atoms, and optomechanical systems) with explicit falsification criteria. This framework provides a concrete existence proof that collapse dynamics can be constructed from standard quantum mechanics plus statistical mechanics.
Category: Quantum Physics
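The qualitative TDGL mechanism described above (symmetric phase stable below threshold, symmetry-broken minima above it) can be illustrated with a one-variable overdamped relaxation. The coefficients here are arbitrary toy values, not the microscopically derived couplings of the paper:

```python
def relax(psi0, r, dt=1e-3, steps=20_000):
    """Euler integration of the toy overdamped Ginzburg-Landau equation
    dpsi/dt = r*psi - psi**3. For r < 0 the symmetric state psi = 0 is
    stable; for r > 0 it is unstable and psi flows to +/- sqrt(r)."""
    psi = psi0
    for _ in range(steps):
        psi += dt * (r * psi - psi ** 3)
    return psi

print(round(relax(0.01, r=-1.0), 3))  # symmetric phase survives: -> 0.0
print(round(relax(0.01, r=1.0), 3))   # broken phase, a definite "outcome": -> 1.0
```

The sign of r plays the role of the coherence-pressure criterion: crossing the threshold flips the potential from single-well to double-well, and the small initial bias selects which minimum is reached.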
[18] viXra:2604.0019 [pdf] submitted on 2026-04-06 20:50:27
Authors: Sun Zuodong
Comments: 8 Pages. (Note by viXra Admin: Further repetition will not be accepted)
Many major breakthroughs in modern life sciences display a prominent feature of phenomenon observation preceding structural analysis in the historical timeline. To fill cognitive gaps, classical theoretical systems often rely on auxiliary hypotheses to maintain logical consistency, which inadvertently increases theoretical complexity and violates the principle of simplicity pursued by science. This paper systematically reviews the complete course from the cognition of neuron structure, the discovery of cellular bioelectric phenomena, and the proposal of the sodium-potassium pump hypothesis to the structural elucidation of ion channels and the DNA double helix, clearly identifying the inherent theoretical contradictions caused by the lag in observation timing. On this basis, returning to the fundamental laws of biophysics, this paper proposes the ion channel windmill model for explaining action potentials and the DNA tetramer whole-chain transmission model for interpreting genetic information transfer. The new model system abandons redundant hypotheses, features a more concise logical chain, and puts forward several key experimentally testable predictions, providing a more internally consistent unified theoretical framework for neuroscience and molecular biology.
Category: Biochemistry
[17] viXra:2604.0018 [pdf] submitted on 2026-04-06 20:43:07
Authors: Newton Adhikari
Comments: 10 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)
Autonomous navigation of collapsed buildings is critical for disaster response, yet no standardized simulation benchmark exists for reproducible evaluation of robot navigation and coverage policies in such environments. We present DisasterSim, an open-source benchmark built on ROS 2 Humble and Gazebo Classic that provides a physically realistic post-earthquake building interior with configurable obstacle density, a multimodal sensor suite with Extended Kalman Filter (EKF)-based fusion, four formally defined evaluation metrics with automated computation, and four reference baseline policies. The entire system (environment, robot, SLAM, navigation stack, metrics, and automated experiment runner) executes from a single command with frozen parameters to ensure full reproducibility. Our empirical study across 39 trials reveals a striking result: three fundamentally different classical exploration paradigms (reactive FSM, frontier-based, and potential field) converge to a statistically indistinguishable performance plateau of approximately 30% area coverage (p > 0.79, |d| ≤ 0.27). This convergence suggests that navigation constraints, not exploration strategy, form the primary performance bottleneck in cluttered disaster environments. A partially trained goal-conditioned PPO policy (370k of 600k planned steps), which navigates toward a fixed known survivor location rather than exploring freely, achieves higher incidental coverage (36.9% mean, 61.1% peak, Cohen's d = 0.78), indicating that goal-directed learned navigation traverses more of the environment en route than classical explorers manage in the same time budget. We additionally identify a quantifiable coverage-localization trade-off (Pearson r = 0.85, p < 0.001), correct a data error present in an earlier draft, and discuss the design of a goal-free RL explorer as the next step toward a fully autonomous learned baseline. All code, configurations, experiment logs, and trained models are publicly available.
Category: Artificial Intelligence
[16] viXra:2604.0017 [pdf] submitted on 2026-04-06 20:34:17
Authors: Mark Syrkin
Comments: 31 Pages.
We argue the statistical foundations of quantum theory and show how they naturally resolve all the quantum "mysterious paradoxes". In this context a great deal of attention is given to the principles of quantum measurement. To facilitate a better appreciation of quantum mechanical paradigms we provide a brief essay on quantum field theory and its natural evolution to nonrelativistic quantum mechanics, and then to classical physics. The presentation aims at both physics students and young scientists, as well as seasoned researchers, who will find many points of interest in the discussion. The first half contains the basics of quantum mechanics and a discussion of the main quantum paradoxes.
Category: Quantum Physics
[15] viXra:2604.0016 [pdf] submitted on 2026-04-06 20:34:42
Authors: Mark Syrkin
Comments: 17 Pages.
We argue the statistical foundations of quantum theory and show how they naturally resolve all the quantum "mysterious paradoxes". In this context a great deal of attention is given to the principles of quantum measurement. To facilitate a better appreciation of quantum mechanical paradigms we provide a brief essay on quantum field theory and its natural evolution to nonrelativistic quantum mechanics, and then to classical physics. The presentation aims at both physics students and young scientists, as well as seasoned researchers, who will find many points of interest in the discussion. The second half contains an essay on quantum field theory and the theory of quantum measurements.
Category: Quantum Physics
[14] viXra:2604.0015 [pdf] submitted on 2026-04-06 01:21:07
Authors: A. Zelmer
Comments: 22 Pages. (Note by viXra Admin: An abstract in the article is required and please cite and list scientific references)
For a mathematical sentence to be absolutely precise, it must be formalized. Unfortunately, the usual formalizations become extremely difficult to understand as they grow more complex. In this paper we propose a formalization based exclusively on logical operators, providing a general visual explanation.
Category: Set Theory and Logic
[13] viXra:2604.0014 [pdf] submitted on 2026-04-06 01:14:08
Authors: Johan Noldus
Comments: 110 Pages.
In this book, we study the spirit and clearly define Physics and strictly spiritual interactions by means of (an extension of) the notion of free will. The basic physical principles of energy sensation (of the naked self, being immersed in society), feeding, and sexual intercourse are discussed and lead precisely to electromagnetism and the dilaton, the weak interactions, and the strong nuclear interactions respectively, with their singlet, doublet, or triplet representations, as well as the internal degrees of freedom, being spin and color, charge, isocharge, and hypercharge of precisely the particles of the supersymmetric standard model. An explanation is provided for the emergence of the proton and neutron as the only stable compound particles which behave as fundamental particles in nature. Also, the psyche in the sense of Psychology and psychism is studied, leading to predictions in medicine, sociology, biology, and other fields of human behavior.
Category: Mind Science
[12] viXra:2604.0013 [pdf] submitted on 2026-04-04 18:58:40
Authors: Vladimir Trifonov, Philip V. Trifonov
Comments: 2 Pages. 2 figures (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)
We present a model of a purely geometric source of Dark Energy (an effective cosmological constant) within the framework of Hyperhamiltonian Quantum Mechanics, where spacetime acquires a locally compact Lie group structure with two volume measures (Haar, generated by the multiplication on the group, and Lebesgue, induced by the natural closed FLRW metric of the group). Following an earlier suggestion by one of the authors that the group-invariant Haar measure may address non-Newtonian behavior at large scales, it is taken to represent vacuum energy density, while the Lebesgue measure is assumed responsible for matter/radiation density. It is shown that the Haar vacuum energy mimics dark energy at cosmological scales, generating accelerated expansion geometrically without an explicit cosmological constant. We develop a self-contained linear cosmological perturbation theory to show almost perfect agreement with ΛCDM.
Category: Relativity and Cosmology
[11] viXra:2604.0012 [pdf] submitted on 2026-04-04 18:05:51
Authors: Bruno R. Galeffi
Comments: 9 Pages.
A detailed examination of the relative distances, shapes, dispositions, numerical symbolism, and 3-D perspective of the emblematic Sri Yantra diagram is presented and discussed. In particular, it is found that the emergence of (+) and (−) vacuum energy densities from a preexisting source, the cosmic expansion, and the occurrence of chemical elements are all embedded in the diagram, as is the golden ratio.
Category: General Science and Philosophy
[10] viXra:2604.0010 [pdf] submitted on 2026-04-04 00:17:16
Authors: Andrew W. Beckwith
Comments: 2 Pages. Moriond Cosmology for 2026 conference
First, we consider whether a generalized HUP, set greater than or equal to Planck's constant divided by the square of a scale factor as well as an inflation field, yields the result that Delta E times Delta t is embedded in a 5-dimensional field which is within a deterministic structure. Our proof ends with Delta t on the order of the Planck time yielding an enormous potential energy. Second, we tie this energy to black hole physics and the early universe; i.e., our idea for black hole physics being used for GW generation is using torsion to form a cosmological constant. Planck-sized black holes allow for a spin density term linked to torsion.
Category: Quantum Gravity and String Theory
[9] viXra:2604.0009 [pdf] submitted on 2026-04-03 06:03:20
Authors: Abhijit Bhattacharyya
Comments: 9 Pages. Data is available upon request.
Fast magnetic flux control is important for circuit quantum electrodynamics (cQED) to control qubits precisely. $3D$ superconducting microwave resonators possess higher volumes, making them insensitive to surface dielectric losses and resulting in higher $Q$ values in comparison to $2D$ resonators, which have higher dissipation due to surface losses. Thus $3D$ resonators increase the decoherence time. Although this makes a strong case for $3D$ superconducting resonators, it is difficult to tune the qubit accurately with a fast magnetic field from outside the $3D$ resonator. In this paper, we try to understand transporting the magnetic field inside a cylindrical superconducting cavity by implementing a cylindrical magnetic hose using finite element analysis.
Category: Classical Physics
[8] viXra:2604.0008 [pdf] submitted on 2026-04-03 12:08:36
Authors: Anton Efimov, Evgenia Sivkova
Comments: 2 Pages.
We report a case of structural convergence emerging in temporally isolated musical improvisation. Two musicians performed three simultaneous recording sessions without any real-time communication, each lasting approximately 20 minutes and conducted in geographically distant locations. Despite the absence of interaction, the recordings exhibit consistent alignment across multiple levels, including dynamics, rhythmic structure, coordinated pauses, and convergent pitch regions. We interpret these observations as evidence of shared internal predictive models developed through prior collaboration, rather than direct information exchange. All raw recordings are provided to support reproducibility and independent evaluation.
Category: Mind Science
[7] viXra:2604.0007 [pdf] submitted on 2026-04-03 23:58:54
Authors: Nikola Chachev
Comments: 15 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)
We present a new algebraic reformulation of the uniqueness problem for periodic orbits of the Collatz map $c(n) = n/2$ ($n$ even) or $c(n) = 3n+1$ ($n$ odd). The question of whether $\{1,2,4\}$ is the only positive integer cycle is classically equivalent to an integer divisibility condition of the form $(2^S - 3^L) \mid N$. We recast this condition as the vanishing of an explicit integer polynomial, the cycle polynomial $P_G(t)$, evaluated at an arithmetic point $t_0$ of multiplicative order $L$ modulo $D = 2^S - 3^L$. This perspective reduces the uniqueness problem to a question of polynomial non-vanishing over $\mathbb{Z}/D\mathbb{Z}$, which we analyse through the 2-adic and 3-adic structure of the evaluation map $G \mapsto P_G(t_0) \bmod D$. Using this framework we establish two partial results. First, for every mixed valuation sequence $G$, one in which the accumulated deviations $\varepsilon_i$ take both positive and negative values, the cycle polynomial satisfies $P_G(t_0) \not\equiv 0 \pmod{D}$ in the special case where exact integer vanishing $A(G) = B(G)$ would be required; this follows from a parity obstruction on 2-adic valuations together with the step-size constraint $G_i \geq 1$. Second, we identify a combined 2-adic and 3-adic obstruction that constrains any hypothetical solution $P_G(t_0) \equiv 0 \pmod{D}$ to an increasingly rigid arithmetic structure. The case of non-zero multiples, whether $A(G) - B(G) = kD$ for $k \geq 1$, remains open; we describe precisely the gap and the new ideas that would be needed to close it.
Category: Number Theory
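The classical uniqueness claim that the entry above reformulates ({1,2,4} as the only positive cycle) is easy to probe numerically for small starting values. A minimal sketch that follows orbits until a value repeats; this checks trajectories only, not the paper's cycle-polynomial machinery:

```python
def collatz_step(x):
    """One application of the Collatz map c(n)."""
    return x // 2 if x % 2 == 0 else 3 * x + 1

def collatz_cycle(n):
    """Follow the Collatz map from n until a value repeats,
    then return the cycle reached, as a sorted tuple."""
    seen = set()
    x = n
    while x not in seen:
        seen.add(x)
        x = collatz_step(x)
    cycle, y = [x], collatz_step(x)
    while y != x:
        cycle.append(y)
        y = collatz_step(y)
    return tuple(sorted(cycle))

# Every starting value up to 10^4 falls into the trivial cycle {1, 2, 4}.
assert all(collatz_cycle(n) == (1, 2, 4) for n in range(1, 10_001))
```

For the trivial cycle itself one has L = 1 odd step and S = 2 halvings, so D = 2² − 3¹ = 1, which is why the divisibility condition is vacuous there.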
[6] viXra:2604.0006 [pdf] submitted on 2026-04-02 07:21:54
Authors: Olov Nilsson
Comments: 15 Pages.
It is self-evident that no knowledge of our surroundings can exist without observations from us or our fellows. This knowledge can of course be transferred to others, changing the context in which we perceive and understand our universe. I use the idea that both velocity and gravity can be described as a "potential for observation", for lack of a better vocabulary. With the function f(x) = 1/x and the inverse of the derivative of the Lorentz transformation, both Einsteinian kinetic energy and relativistic gravity can be computed. With the constants c and G, it is possible to construct a version of post-Newtonian relativistic gravity, consistent with basic parts of Einstein's theory, but also suggesting a calculation of the expanding universe and the deduction of an exact value of the Hubble constant, with no constants involved other than c and G.
Category: Relativity and Cosmology
[5] viXra:2604.0005 [pdf] submitted on 2026-04-02 21:27:57
Authors: Sarthak Agrawal, Sanjeev Saxena
Comments: 22 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)
We consider two variations of the classical secretary problem.
* A variation of the returning secretary problem where each interviewee may appear a second time with a fixed probability p. The decision-maker observes interviewees sequentially and must choose whether to accept or reject each appearance. We characterize the optimal threshold rule and examine its dependence on the reappearance probability p, highlighting how additional information from repeated appearances improves selection performance.
* A variation of the secretary problem in which success is defined as selecting any one of the top three interviewees rather than the single best. Interviewees are observed sequentially in random order, and decisions are irreversible. We estimated the success probability under this relaxed success criterion using the threshold strategy of the classical secretary problem. The results show that allowing selection among the top three significantly increases the success probability and shifts the optimal stopping threshold earlier than in the classical problem. This model provides insight into realistic decision-making scenarios where top interviewees are more or less similar.
Category: Data Structures and Algorithms
[4] viXra:2604.0004 [pdf] submitted on 2026-04-02 21:24:27
Authors: Alberto Coe
Comments: 4 Pages. (Note by viXra Admin: Please cite and list scientific references)
We examine the possibility that selected physical observables may approximately organize around a discrete half-integer harmonic semilattice generated by the ratio 3/2, taking the electron rest energy as the reference scale.
Category: Mathematical Physics
[3] viXra:2604.0003 [pdf] submitted on 2026-04-02 13:18:10
Authors: Tianqi Zhu
Comments: 6 Pages.
Recent progress in minimally invasive brain-computer interfaces (BCIs), nanoscale neural interfacing, and multimodal neural decoding has enabled increasingly precise access to and interpretation of human brain activity. This paper analyzes the dual-use risks associated with these technologies when integrated with advanced artificial intelligence and adaptive social engineering methodologies. We formalize a conceptual architecture for "brain-invading systems," which leverage closed-loop neural interaction, personalized modeling, and behavioral manipulation strategies to influence cognitive and affective states. We examine enabling components, including remote-capable neural interfaces and high-fidelity decoding pipelines, and discuss their potential convergence into scalable manipulation frameworks. Key challenges in detecting such systems are evaluated, including signal attribution, adversarial interference, and limitations in current neurodiagnostic methods. We further discuss opportunities for detecting the malicious use of such neuro-AI systems based on EEG signals.
Category: Artificial Intelligence
[2] viXra:2604.0002 [pdf] submitted on 2026-04-02 21:37:03
Authors: Calvin Alexander Grant
Comments: 16 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)
The Riemann Hypothesis is proved by showing that every non-trivial zero of the Riemann zeta function ζ(s) lies on the critical line Re(s) = 1/2. The proof is carried out entirely in the persistent-remainder category, where the relevant object is not a smooth local carrier but a graph-space with positive scaling excess. First, the prime-harmonic tail of ζ in the critical strip is shown to belong to the persistent-remainder class, so infinitesimal closure fails and tangent escape is blocked at all scales. Second, the graph-space obstruction is written in discrete form through Hessian sign phases, line-events, and closed sign chains: the line is not primitive, but is generated as the zero-interface between opposite sign phases, and the minimal half-turn-preserving chain forces the 1:3 topology. Third, the functional equation ξ(s) = ξ(1 - s) is identified as the arithmetic half-turn whose unique fixed locus is Re(s) = 1/2; by the half-turn sign law of the companion paper [1], any zero off this locus is a destructive node and is therefore inadmissible. Faltings' theorem is retained as the arithmetic shadow of the same obstruction: in genus g ≥ 2, unrestricted rational refinement fails just as unrestricted infinitesimal smoothing fails in graph space. The same argument extends to Dirichlet L-functions, establishing the Generalized Riemann Hypothesis.
Category: Number Theory
[1] viXra:2604.0001 [pdf] submitted on 2026-04-01 10:49:01
Authors: David Alfyorov
Comments: 18 pages, 4 figures. Also available at doi:10.5281/zenodo.19098042
We compute the complete nonlocal one-loop form factors F1(□/Λ^2) and F2(□/Λ^2, ξ) of the curvature-squared sector of the spectral action S = Tr f(D^2/Λ^2) for the full Standard Model particle content: 4 real scalars (Higgs), 45/2 Dirac-equivalent fermions (3 generations), and 12 gauge bosons (SU(3)×SU(2)×U(1)). Using the Barvinsky-Vilkovisky covariant perturbation theory and the Codello-Zanusso diagrammatic heat kernel, we derive closed-form results for each spin sector (0, 1/2, 1) in the Weyl basis and assemble the Standard Model totals. The local limits yield α_C = 13/120 for the Weyl-squared coefficient and α_R(ξ) = 2(ξ - 1/6)^2 for the R^2 coefficient, where ξ is the Higgs non-minimal coupling. Both form factors are shown to be entire functions, ensuring no additional propagator poles beyond those of the classical theory. We derive the c_1/c_2 ratio, the scalar graviton decoupling condition at conformal coupling ξ = 1/6, and the UV asymptotic behavior. The form factors yield a modified Newtonian potential with calculable effective masses m_2 = Λ(60/13)^{1/2} and m_0 = Λ/(6(ξ - 1/6)^2)^{1/2}, connecting the spectral action framework to solar-system phenomenology. All results are verified by independent multi-precision numerical evaluation.
Category: Quantum Gravity and String Theory