Dimensionless constants of the atom. Inconstant constants

Let us sum up some results. The reference book Tables of Physical Quantities (Moscow: Atomizdat, 1976) contains 1005 pages of text and many millions of numbers; how is one to deal with them?

These quantities are divided into at least four types.

a) Natural units of measurement, or physically marked points of spectra. These are not numbers but quantities such as $G$, $c$, $h$, $m_e$, $e$ (the electron charge). They are the dimensional characteristics of certain phenomena that can be reproduced many times with a high degree of accuracy. This reflects the fact that nature replicates elementary situations in huge series. Reflections on the identity of similar building blocks of the universe have sometimes led to physical ideas as profound as Bose-Einstein and Fermi-Dirac statistics. Wheeler's fantastic idea that all electrons are identical because they are instantaneous sections of the world line of a single electron tangled into a ball led Feynman to an elegant simplification of the diagrammatic technique of calculation in quantum field theory.

b) True, or dimensionless, constants. These are ratios of several marked points on the spectrum of a quantity of the same dimension, for example, ratios of the masses of elementary particles: we have already mentioned $m_p/m_e$. The identification of different dimensions upon taking a new law into account, i.e., the reduction of the group of dimensions, leads to the merging of previously distinct spectra and to the need to explain new numbers.

For example, the dimensions of $m_e$, $c$ and $h$ generate the same group as Newton's $(G, c, h)$ and therefore lead to natural atomic units for the dimensions M, L, T, just as the Planck units do. Hence their ratios to the Planck units require a theoretical explanation. But, as we said, this is impossible as long as there is no $(G, c, h)$-theory. However, in the $(m_e, c, h)$-theory, quantum electrodynamics, there is a dimensionless quantity to whose value modern quantum electrodynamics, in a certain sense of the word, owes its very existence. Place two electrons at a distance $\hbar/m_e c$ (the reduced Compton wavelength of the electron) and measure the ratio of the energy of their electrostatic repulsion to the energy $m_e c^2$ equivalent to the rest mass of the electron. The result is the number $\alpha = 7.2972\times10^{-3} \approx 1/137$. This is the famous fine structure constant.
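This prescription is easy to verify numerically (a sketch in SI units; the constant values come from scipy.constants):

```python
from scipy.constants import e, epsilon_0, hbar, c, m_e, pi

d = hbar / (m_e * c)                              # reduced Compton wavelength of the electron
coulomb_energy = e**2 / (4 * pi * epsilon_0 * d)  # electrostatic energy of two electrons a distance d apart
rest_energy = m_e * c**2                          # rest-mass energy of the electron

alpha = coulomb_energy / rest_energy
print(alpha, 1 / alpha)                           # ~0.0072974 and ~137.036
```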

Quantum electrodynamics describes, in particular, processes in which the number of particles is not conserved: the vacuum creates electron-positron pairs, and they annihilate. Because the pair-production energy (at least $2 m_e c^2$) is hundreds of times greater than the energy of the characteristic Coulomb interaction (thanks to the smallness of $\alpha$), it is possible to set up an efficient calculational scheme in which these radiative corrections are not discarded entirely, yet also do not hopelessly "spoil the life" of the theorist.

There is no theoretical explanation for the value of $\alpha$. Mathematicians have remarkable spectra of their own: the spectra of distinguished linear operators (the generators of simple Lie groups in irreducible representations), the volumes of fundamental domains, the dimensions of homology and cohomology spaces, and so on, which limit the possible choices. But back to constants.

Their next type, which takes up a lot of space in tables, is:

c) Conversion factors from one scale to another, for example, from atomic scales to "human" ones. These include the already mentioned Avogadro number $N_0 = 6.02\times10^{23}$, essentially one gram expressed in units of "proton mass" (although the traditional definition is slightly different), as well as things like a light year in kilometers. The most repugnant for the mathematician here are, of course, the conversion factors from one physically meaningless unit to another, just as meaningless: from cubits to feet or from Réaumur to Fahrenheit. In human terms these are sometimes the most important numbers; as Winnie-the-Pooh wisely remarked: "I don't know how many liters, and meters, and kilograms are in it, but tigers, when they jump, seem huge to us."
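The remark that Avogadro's number is "essentially one gram in proton masses" is easy to check (a sketch; values from scipy.constants):

```python
from scipy.constants import N_A, m_p

# One gram spread over Avogadro's number of units is roughly one proton mass
print(1e-3 / N_A)  # ~1.66e-27 kg (one atomic mass unit)
print(m_p)         # ~1.67e-27 kg: close, which is the sense of "essentially one gram"
```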

d) "Diffuse spectra". This is a characteristic of materials (not elements or pure compounds, but ordinary technological grades of steel, aluminum, copper), astronomical data (the mass of the Sun, the diameter of the Galaxy ...) and many of the same kind. Nature produces stones, planets, stars and galaxies, not caring about their sameness, unlike electrons, but still their characteristics change only within fairly certain limits. Theoretical explanations of these "allowed zones", when they are known, are remarkably interesting and instructive.

Manin Yu. I., Mathematics as a Metaphor, Moscow: MTsNMO Publishing House, 2010, pp. 177-179.

Interaction constant

Material from the free Russian encyclopedia "Tradition"

An interaction constant (sometimes called a coupling constant) is a parameter in field theory that determines the relative strength of an interaction between particles or fields. In quantum field theory, interaction constants are associated with the vertices of the corresponding interaction diagrams. Both dimensionless parameters and related dimensional quantities characterizing interactions are used as interaction constants. Examples are the dimensionless electromagnetic interaction constant $\alpha$ and the electric charge $e$, measured in coulombs (C).


Comparison of interactions

If we choose an object that participates in all four fundamental interactions, then the values of the dimensionless interaction constants of this object, found from a general rule, will show the relative strength of these interactions. At the level of elementary particles, the proton is most often used as such an object. The base energy for comparing the interactions is the electromagnetic energy of a photon, by definition equal to:

$$E = \frac{\hbar c}{\lambda},$$

where $\hbar$ is the reduced Planck constant, $c$ is the speed of light, and $\lambda$ is the (reduced) wavelength of the photon. The choice of photon energy is not accidental, since modern science rests on a wave description based on electromagnetic waves: all basic measurements, of length, time, and in particular energy, are made with their help.

Gravitational interaction
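By analogy with the definitions in the following subsections (our reconstruction of the standard presentation, not original text), the gravitational interaction of two protons is described by the energy

$$E = \frac{G m_p^2}{r},$$

and its ratio to the photon energy $\hbar c / r$ gives the dimensionless gravitational interaction constant

$$\alpha_G = \frac{G m_p^2}{\hbar c} \approx 5.9\times10^{-39},$$

where $G$ is the gravitational constant and $m_p$ is the proton mass.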

Weak interaction

The energy associated with the weak interaction can be represented in the following form:

$$E = \frac{q_w^2}{r}\,\exp\!\left(-\frac{m_W c\, r}{\hbar}\right),$$

where $q_w$ is the effective charge of the weak interaction and $m_W$ is the mass of the virtual particles considered to be the carriers of the weak interaction (the W and Z bosons).

The square of the effective charge of the weak interaction for the proton is expressed in terms of the Fermi constant $G_F \approx 1.4\times10^{-62}$ J$\cdot$m$^3$ and the proton mass $m_p$:

$$q_w^2 = G_F \left(\frac{m_p c}{\hbar}\right)^2.$$

At sufficiently small distances, the exponential in the energy of the weak interaction can be neglected. In this case, the dimensionless weak interaction constant is defined as follows:

$$\alpha_w = \frac{q_w^2}{\hbar c} = \frac{G_F m_p^2 c}{\hbar^3} \approx 1\times10^{-5}.$$
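A numerical sanity check of these expressions (a sketch; the Fermi constant in J$\cdot$m$^3$ is converted from its conventional value $G_F/(\hbar c)^3 \approx 1.166\times10^{-5}\ \text{GeV}^{-2}$):

```python
from scipy.constants import hbar, c, m_p, e

GeV = 1e9 * e                              # 1 GeV in joules
G_F = 1.166e-5 * (hbar * c)**3 / GeV**2    # Fermi constant, ~1.44e-62 J*m^3

q_w2 = G_F * (m_p * c / hbar)**2           # squared effective weak charge of the proton
alpha_w = q_w2 / (hbar * c)                # dimensionless weak interaction constant
print(alpha_w)                             # ~1e-5
```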

Electromagnetic interaction

The electromagnetic interaction of two motionless protons is described by the electrostatic energy:

$$E = \frac{e^2}{4\pi\epsilon_0 r},$$

where $e$ is the elementary charge and $\epsilon_0$ is the electric constant (vacuum permittivity).

The ratio of this energy to the photon energy $\hbar c / r$ defines the electromagnetic interaction constant, known as the fine structure constant:

$$\alpha = \frac{e^2}{4\pi\epsilon_0 \hbar c} \approx \frac{1}{137}.$$

Strong interaction

At the level of hadrons, the Standard Model of particle physics treats the strong interaction as a "residual" interaction of the quarks entering into hadrons. It is assumed that gluons, as carriers of the strong interaction, generate virtual mesons in the space between hadrons. In the pion-nucleon Yukawa model, the nuclear forces between nucleons are explained as the result of the exchange of virtual pions, and the interaction energy has the following form:

$$E = \frac{g_{\pi N}^2}{r}\,\exp\!\left(-\frac{m_\pi c\, r}{\hbar}\right),$$

where $g_{\pi N}$ is the effective charge of the pseudoscalar pion-nucleon interaction and $m_\pi$ is the pion mass.

The dimensionless strong interaction constant is:

$$\alpha_s = \frac{g_{\pi N}^2}{\hbar c} \approx 14.6.$$

Constants in quantum field theory

Interaction effects in field theory are often computed using perturbation theory, in which the functions in the equations are expanded in powers of the interaction constant. For all interactions except the strong one, the interaction constant is usually much less than unity. This makes perturbation theory efficient, since the contribution of the higher terms of the expansion decreases rapidly and computing them becomes unnecessary. In the case of the strong interaction, perturbation theory becomes unsuitable and other methods of calculation are required.
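A toy illustration of why the size of the constant matters (a sketch; the series coefficients are set to 1, whereas real perturbative expansions have growing coefficients):

```python
alpha_em = 1 / 137.036   # electromagnetic coupling
alpha_s = 14.6           # pion-nucleon strong coupling quoted above

for n in range(1, 6):
    # n-th order term of a schematic series sum_n (coupling)^n with unit coefficients
    print(n, alpha_em**n, alpha_s**n)

# The electromagnetic terms shrink by roughly two orders of magnitude per order,
# so the series can be truncated; the strong-coupling terms grow without bound,
# so truncating the expansion is meaningless.
```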

One of the predictions of quantum field theory is the effect of "running constants": the interaction constants change slowly as the energy transferred during the interaction of particles increases. Thus the electromagnetic interaction constant grows, while the strong interaction constant falls, with increasing energy. Quarks in quantum chromodynamics have their own strong interaction constant:

$$\alpha_s = \frac{g_s^2}{\hbar c},$$

where $g_s$ is the effective color charge of a quark emitting virtual gluons in order to interact with another quark. As the distance between quarks decreases, which is achieved in high-energy particle collisions, a logarithmic decrease of $\alpha_s$ and a weakening of the strong interaction are expected (the asymptotic freedom of quarks). On the scale of transferred energy of the order of the mass-energy of the Z boson (91.19 GeV) one finds $\alpha_s \approx 0.12$. On the same energy scale, the electromagnetic interaction constant increases to a value of the order of 1/127 instead of $\approx 1/137$ at low energies. It is assumed that at still higher energies, around $10^{18}$ GeV, the values of the gravitational, weak, electromagnetic and strong interaction constants of particles will approach one another and may even become approximately equal.
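For context, the logarithmic weakening mentioned above is captured by the standard one-loop QCD running formula (a textbook result, not part of the original article):

$$\alpha_s(Q^2) = \frac{12\pi}{(33 - 2 n_f)\,\ln(Q^2/\Lambda^2)},$$

where $n_f$ is the number of quark flavors lighter than the energy scale $Q$ and $\Lambda \approx 0.2$ GeV is the QCD scale parameter; as $Q^2$ grows, $\alpha_s$ falls off logarithmically, which is the asymptotic freedom referred to in the text.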

Constants in other theories

String theory

In string theory, interaction constants are not considered constant but are dynamical in nature. In particular, one and the same theory at low energies looks as if the strings move in ten dimensions, and at high energies as if they move in eleven. A change in the number of dimensions is accompanied by a change in the interaction constants.

Strong gravity

In the strong gravity model, gravitational and electromagnetic forces together are considered the main components of the strong interaction. In this model, instead of considering the interaction of quarks and gluons, only two fundamental fields are taken into account, the gravitational and the electromagnetic, which act both in the charged and massive matter of elementary particles and in the space between them. Quarks and gluons are assumed to be not real particles but quasiparticles reflecting the quantum properties and symmetries inherent in hadronic matter. This approach drastically reduces the number of free parameters that are postulated rather than justified, a record among physical theories held by the Standard Model of elementary particle physics with its at least 19 such parameters.

Another consequence is that the weak and strong interactions are not considered independent field interactions. The strong interaction is reduced to combinations of gravitational and electromagnetic forces, in which interaction retardation effects (dipole and orbital torsion fields and magnetic forces) play an important role. Accordingly, the strong interaction constant is defined by analogy with the gravitational interaction constant, with the strong gravitational constant $\Gamma$ in place of $G$:

$$\alpha_{\Gamma} = \frac{\Gamma m_p^2}{\hbar c}.$$

It is useful to understand which constants are fundamental at all. Take, for example, the speed of light. What is fundamental is that it is finite, not its particular value: we have defined our units of distance and time so that it comes out as it does. In other units it would be different.

What, then, is fundamental? Dimensionless ratios and the characteristic strengths of interactions, which are described by dimensionless interaction constants. Roughly speaking, an interaction constant characterizes the probability of a process. For example, the electromagnetic constant characterizes the probability with which an electron scatters off a proton.

Let us see how dimensional quantities can be built up logically. One can take the ratio of the proton and electron masses and a specific value of the electromagnetic interaction constant; atoms will then appear in our universe. Now take a specific atomic transition, take the frequency of the emitted light, and measure everything in periods of that light's oscillation: there is the unit of time. In that time light flies some distance, giving a unit of distance. A photon of that frequency has some energy, giving a unit of energy. The strength of the electromagnetic interaction then fixes how large an atom is in our new units: we measure its size as the ratio of the time it takes light to cross the atom to the period of oscillation. This value depends only on the strength of the interaction. If we now define the speed of light as the ratio of the size of an atom to the period of oscillation, we get a number, but it is not fundamental. The second and the meter are time and distance scales characteristic of us. We measure the speed of light in them, but its specific value carries no physical meaning.
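Here is a sketch of that bookkeeping in Python (our illustration; the hydrogen Lyman-alpha transition stands in for the "specific atomic transition" and the Bohr radius for the atom's size):

```python
from scipy.constants import h, c, hbar, m_e, alpha, Rydberg, pi

E = 0.75 * h * c * Rydberg    # energy of the hydrogen 1s -> 2p transition, (3/4) Rydberg
T = h / E                     # period of the emitted light wave: our unit of time
light_path = c * T            # distance light covers in one period: our unit of distance
a0 = hbar / (m_e * c * alpha) # Bohr radius, the characteristic size of the atom

ratio = a0 / light_path       # atom size in the new units
print(ratio)                  # ~4.36e-4
print(3 * alpha / (16 * pi))  # the same number, built from alpha alone
```

The printed ratio coincides with $3\alpha/16\pi$, confirming the claim in the text: in these "atomic" units the size of the atom is fixed by the strength of the electromagnetic interaction alone.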

Thought experiment: let there be another universe where the meter is exactly twice as large as ours, but all the fundamental constants and ratios are the same. Then interactions will take twice as long to propagate, and human-like beings there will perceive a second as twice as long. Of course, they will not feel it at all. When they measure the speed of light, they will get the same value as we do, because they measure it in their own characteristic meters and seconds.

Therefore, physicists attach no fundamental significance to the fact that the speed of light is 300,000 km/s. But they do attach it to the constant of the electromagnetic interaction, the so-called fine structure constant (approximately 1/137).

Moreover, the constants of the fundamental interactions (electromagnetism, the strong and weak interactions, gravitation) associated with the corresponding processes depend on the energies of those processes. The electromagnetic interaction constant is one thing on the energy scale of the order of the electron mass and another, larger, on the scale of the order of the Higgs boson mass: the strength of the electromagnetic interaction grows with energy. How the interaction constants change with energy can be calculated by knowing what set of particles we have and the ratios of their properties.

Therefore, to fully describe the fundamental interactions at our level of understanding, it is enough to know the set of particles we have, the mass ratios of the elementary particles, the interaction constants at one scale (for example, at the scale of the electron mass), and the relative strengths with which each particular particle couples to a given interaction. In the electromagnetic case this last corresponds to the ratio of charges: the charge of the proton equals the charge of the electron, because the force of interaction of an electron with an electron coincides with the force of interaction of an electron with a proton; were the charge twice as large, the force would be twice as large. The strength, I repeat, is measured in dimensionless probabilities. The question comes down to why these values are what they are.

Here nothing is clear. Some scientists believe that a more fundamental theory will emerge from which it will follow how masses, charges, and so on are related; grand unified theories answer the latter question in a certain sense. Others believe that the anthropic principle is at work: if the fundamental constants were different, we simply would not exist in such a universe.

How unimaginably strange the world would be if physical constants could change! For example, the so-called fine structure constant is approximately equal to 1/137. If it had a different value, then perhaps there would be no difference between matter and energy.

There are things that never change. Scientists call them physical constants, or world constants. It is believed that the speed of light $c$, the gravitational constant $G$, the electron mass $m_e$ and some other quantities always and everywhere remain unchanged. They form the foundation on which physical theories rest, and they determine the structure of the Universe.

Physicists are working hard to measure the world constants with ever-greater accuracy, but no one has yet been able to explain why their values are what they are. In the SI system $c = 299792458$ m/s, $G = 6.673\cdot10^{-11}$ N$\cdot$m$^2$/kg$^2$, $m_e = 9.10938188\cdot10^{-31}$ kg are completely unrelated quantities that share only one property: if they changed even a little, the existence of complex atomic structures, including living organisms, would be in serious doubt. The desire to justify the values of the constants became one of the incentives for developing a unified theory that would fully describe all existing phenomena. With its help, scientists hoped to show that each world constant can have only one possible value, determined by the internal mechanisms behind the deceptive arbitrariness of nature.

The best candidate for the title of unified theory is M-theory (a variant of string theory), which is consistent only if the Universe has eleven space-time dimensions rather than four. Therefore, the constants we observe may not actually be truly fundamental. The true constants exist in the full multidimensional space, and we see only their three-dimensional "silhouettes".

OVERVIEW: WORLD CONSTANTS

1. In many physical equations, there are quantities that are considered constant everywhere - in space and time.

2. Recently, scientists have come to doubt the constancy of the world constants. Comparing the results of quasar observations with laboratory measurements, they conclude that in the distant past chemical elements absorbed light differently than they do today. The difference can be explained by a change of a few millionths in the fine structure constant.

3. Confirmation of even such a small change would be a real revolution in science. The observed constants may turn out to be only "silhouettes" of the true constants existing in multidimensional space-time.

Meanwhile, physicists have come to the conclusion that the values of many constants may be the result of random events and interactions between elementary particles in the early stages of the history of the Universe. String theory allows for the existence of a huge number ($10^{500}$) of worlds with different self-consistent sets of laws and constants (see "The String Theory Landscape", In the World of Science, No. 12, 2004). So far, scientists have no idea why our combination was selected. Perhaps further research will reduce the number of logically possible worlds to one, but it is also possible that our Universe is only a small part of a multiverse in which different solutions of the equations of a unified theory are realized, and we are observing merely one variant of the laws of nature (see "Parallel Universes", In the World of Science, No. 8, 2003). In that case, for many world constants there is no explanation except that they form a rare combination that permits the development of consciousness. Perhaps the Universe we observe is one of many isolated oases surrounded by an infinity of lifeless outer space, a surreal place where forces of nature completely alien to us dominate and where particles like electrons and structures like carbon atoms and DNA molecules are simply impossible. An attempt to travel there would be fatal.

String theory was developed, in part, to explain the apparent arbitrariness of physical constants, so its basic equations contain only a few arbitrary parameters. But so far it does not explain the observed values of the constants.

Reliable ruler

In fact, the use of the word "constant" is not entirely legitimate. Our constants could change in time and space. If the extra spatial dimensions changed in size, the constants in our three-dimensional world would change with them. And if we looked far enough into space, we could see regions where the constants take different values. Ever since the 1930s, scientists have speculated that the constants may not be constant. String theory gives this idea theoretical plausibility and makes the search for inconstancy all the more important.

The first problem is that the laboratory setup itself can be sensitive to changes in the constants. The sizes of all atoms could increase, but if the ruler used for the measurements became longer too, nothing could be said about the change in the sizes of the atoms. Experimenters usually assume that the measurement standards (rulers, weights, clocks) are unchanged, but this cannot be assumed when testing the constants. Researchers should therefore look at dimensionless constants, pure numbers that do not depend on the system of units, for example the ratio of the proton mass to the electron mass.
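The distinction is easy to see numerically (a sketch; values from scipy.constants):

```python
from scipy.constants import m_p, m_e, c

# A dimensionless constant: the same number in any system of units
print(m_p / m_e)   # ~1836.15

# A dimensional constant: its numerical value is an artifact of the units chosen
print(c)           # 299792458.0 m/s, but exactly 1 in light-seconds per second
```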

Does the internal structure of the universe change?

Of particular interest is the quantity $\alpha = e^2/(2\epsilon_0 h c)$, which combines the speed of light $c$, the electric charge of the electron $e$, Planck's constant $h$, and the so-called vacuum permittivity $\epsilon_0$. It is called the fine structure constant. It was first introduced in 1916 by Arnold Sommerfeld, one of the first to try to apply quantum mechanics to electromagnetism: $\alpha$ connects the relativistic ($c$) and quantum ($h$) characteristics of electromagnetic ($e$) interactions involving charged particles in empty space ($\epsilon_0$). Measurements have shown that this value equals 1/137.03599976 (approximately 1/137).

If $\alpha$ had a different value, the whole world would change. Were it smaller, the density of solid matter made of atoms would decrease (in proportion to $\alpha^3$), molecular bonds would break at lower temperatures ($\alpha^2$), and the number of stable elements in the periodic table could increase ($1/\alpha$). Were $\alpha$ too large, small atomic nuclei could not exist, because the nuclear forces binding them would be unable to prevent the mutual repulsion of the protons. At $\alpha > 0.1$, carbon could not exist.

Nuclear reactions in stars are especially sensitive to $\alpha$. For nuclear fusion to occur, the star's gravity must create a temperature high enough to push nuclei together despite their tendency to repel each other. If $\alpha$ exceeded 0.1, fusion would be impossible (provided, of course, that other parameters, such as the ratio of the electron and proton masses, remained the same). A change in $\alpha$ of just 4% would shift the energy levels in the carbon nucleus enough for its production in stars to cease.

Nuclear techniques

The second, more serious, experimental problem is that measuring changes in the constants requires high-precision equipment that must be extremely stable. Even with atomic clocks, the drift of the fine structure constant can be tracked over only a few years. If $\alpha$ changed by more than $4\cdot10^{-15}$ in three years, the most accurate clocks would detect it. Nothing of the kind has been recorded so far. It would seem: why not declare constancy confirmed? But three years is an instant for the cosmos. Slow but significant changes over the history of the Universe could go unnoticed.
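To see why the clock bound does not settle the question, compare rates (a rough sketch using the numbers quoted in this article):

```python
clock_bound = 4e-15 / 3            # fractional drift per year detectable by atomic clocks
quasar_change = 6e-6               # fractional increase in alpha reported later from quasar data
over_years = 10e9                  # spread over roughly 6-12 billion years; take ~10 Gyr

average_rate = quasar_change / over_years
print(average_rate)                # ~6e-16 per year
print(average_rate < clock_bound)  # True: such a slow drift evades laboratory clocks
```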

LIGHT AND THE FINE STRUCTURE CONSTANT

Fortunately, physicists have found other ways to test. In the 1970s, scientists at the French Atomic Energy Commission noticed peculiarities in the isotopic composition of ore from the uranium mine at Oklo in Gabon (West Africa): it resembled spent fuel from a nuclear reactor. Apparently, about 2 billion years ago a natural nuclear reactor operated at Oklo (see "A Divine Reactor", In the World of Science, No. 1, 2004).

In 1976, Alexander Shlyakhter of the Leningrad Institute of Nuclear Physics noted that the performance of the natural reactor depended critically on the precise energy of a particular state of the samarium nucleus that captures neutrons, and that energy is strongly tied to the value of $\alpha$. Had the fine structure constant been slightly different, no chain reaction could have occurred. But it did occur, which means that over the past 2 billion years the constant has not changed by more than $1\cdot10^{-8}$. (Physicists continue to argue about the exact quantitative results because of the inevitable uncertainty about the conditions in the natural reactor.)

In 1962, P. James E. Peebles and Robert Dicke of Princeton University were the first to apply such an analysis to ancient meteorites: the relative abundances of isotopes produced in them by radioactive decay depend on $\alpha$. The most sensitive constraint is associated with the beta decay that converts rhenium into osmium. According to recent work by Keith Olive of the University of Minnesota and Maxim Pospelov of the University of Victoria in British Columbia, at the time the meteorites formed $\alpha$ differed from its present value by no more than $2\cdot10^{-6}$. This result is less precise than the Oklo data, but it reaches further back in time, to the formation of the solar system 4.6 billion years ago.

To explore possible changes over even longer periods of time, researchers must look to the heavens. Light from distant astronomical objects travels to our telescopes for billions of years and bears the imprint of the laws and world constants of the epoch when it began its journey and its interaction with matter.

Spectral lines

Astronomers became involved in the story of the constants shortly after the discovery of quasars in 1965, when these newly identified bright sources of light were shown to lie at great distances from the Earth. Because the path of light from a quasar to us is so long, it inevitably crosses the gaseous neighborhoods of young galaxies. The gas absorbs the quasar's light at specific frequencies, imprinting a barcode of narrow lines on its spectrum (see box below).

SEARCHING FOR CHANGES IN QUASAR RADIATION

When gas absorbs light, the electrons in its atoms jump from low energy levels to higher ones. The energy levels are determined by how strongly the atomic nucleus holds the electrons, which depends on the strength of the electromagnetic interaction between them and, therefore, on the fine structure constant. If the constant was different at the moment the light was absorbed, or in the particular region of the Universe where this happened, then the energy required to move an electron to a new level, and hence the wavelengths of the transitions observed in the spectra, should differ from those observed today in laboratory experiments. The character of the change in the wavelengths depends critically on the distribution of electrons over the atomic orbits: for a given change in $\alpha$, some wavelengths decrease while others increase. This complex pattern of effects is hard to confuse with data-calibration errors, which makes the experiment extremely informative.

When we started the work seven years ago, we faced two problems. First, the wavelengths of many spectral lines had not been measured with sufficient accuracy. Oddly enough, scientists knew far more about the spectra of quasars billions of light years away than about the spectra of terrestrial samples. We needed high-precision laboratory measurements against which to compare the quasar spectra, and we persuaded experimenters to make them. They were carried out by Anne Thorne and Juliet Pickering of Imperial College London, and later by teams led by Sveneric Johansson of the Lund Observatory in Sweden and by Ulf Griesmann and Rainer Kling of the National Institute of Standards and Technology in Maryland.

The second problem was that earlier observers had used so-called alkali doublets, pairs of absorption lines that appear in the atomic gases of carbon or silicon, comparing the intervals between these lines in quasar spectra with laboratory measurements. However, this method failed to exploit one specific phenomenon: a variation of $\alpha$ causes not only a change in the interval between the energy levels of an atom relative to the lowest-energy level (the ground state), but also a shift in the position of the ground state itself. In fact, the second effect is even stronger than the first. As a result, the accuracy of the observations was only $1\cdot10^{-4}$.

In 1999, one of the authors of this article (Webb) and Victor V. Flambaum of the University of New South Wales in Australia developed a technique that takes both effects into account. As a result, the sensitivity was improved tenfold. In addition, it became possible to compare different kinds of atoms (for example, magnesium and iron) and to perform additional cross-checks. Complicated calculations were needed to establish exactly how the observed wavelengths vary in different types of atoms. Armed with state-of-the-art telescopes and sensors, we decided to test the constancy of $\alpha$ with unprecedented accuracy using the new many-multiplet method.

Revision of views

When we started the experiments, we simply wanted to establish with greater accuracy that the value of the fine structure constant in ancient times was the same as it is today. To our surprise, the results obtained in 1999 revealed small but statistically significant differences, which were subsequently confirmed. Using data on 128 quasar absorption-line systems, we recorded an increase in $\alpha$ of $6\cdot10^{-6}$ over the past 6-12 billion years.

The results of measurements of the fine structure constant do not allow final conclusions. Some indicate that it was once smaller than it is now, others that it was not. Perhaps $\alpha$ changed in the distant past but has become constant by now. (The boxes represent the ranges of the data.)

Bold claims require solid evidence, so our first step was to carefully review our methods of data collection and analysis. Measurement errors can be divided into two types: systematic and random. Random inaccuracies are simple: in each individual measurement they take different values, and over a large number of measurements they average out and tend to zero. Systematic errors, which do not average out, are harder to fight. In astronomy, uncertainties of this kind are encountered at every turn. In laboratory experiments the instruments can be tuned to minimize errors, but astronomers cannot "tune" the Universe, and they have to accept that all their data-collection methods contain built-in biases. For example, the observed spatial distribution of galaxies is noticeably biased toward bright galaxies, because they are easier to detect. Identifying and neutralizing such biases is a constant challenge for observers.

First, we turned our attention to possible distortion of the wavelength scale against which the quasar spectral lines were measured. Such distortion could arise, for example, during the processing of "raw" quasar observations into a calibrated spectrum. Although a simple linear stretching or shrinking of the wavelength scale could not exactly mimic a change in $\alpha$, even an approximate similarity would suffice to explain the results. Step by step, we eliminated simple distortion errors by substituting calibration data in place of the quasar observations.

For more than two years, we investigated various causes of bias to make sure their influence was negligible. We found only one potential source of serious errors: the magnesium absorption lines. Each of magnesium's three stable isotopes absorbs light at a different wavelength; the wavelengths are very close to one another and appear in quasar spectra as a single line. The contribution of each isotope is judged from laboratory measurements of their relative abundances. Their distribution in the young Universe could have differed significantly from today's if the stars emitting magnesium were, on average, heavier than their present-day counterparts. Such differences could mimic a change in $\alpha$. But the results of a study published this year indicate that the observed facts are not so easy to explain away. Yeshe Fenner and Brad K. Gibson of Swinburne University of Technology in Australia and Michael T. Murphy of the University of Cambridge concluded that the isotope abundances required to mimic the change in $\alpha$ would also lead to excess synthesis of nitrogen in the early Universe, which is flatly inconsistent with observations. So we have to live with the possibility that $\alpha$ really did change.

SOMETIMES IT CHANGES, SOMETIMES IT DOESN'T

According to the hypothesis put forward by the authors of this article, in some periods of cosmic history the fine structure constant remained unchanged, while in others it increased. The experimental data (see the previous box) are consistent with this assumption.

The scientific community immediately appreciated the significance of our results. Quasar-spectra researchers around the world took up measurements at once. In 2003, research teams led by Sergei Levshakov of the Ioffe Physico-Technical Institute in St. Petersburg and by Ralf Quast of the University of Hamburg studied three new quasar systems. Last year, Hum Chand and Raghunathan Srianand of the Inter-University Centre for Astronomy and Astrophysics in India, Patrick Petitjean of the Institute of Astrophysics, and Bastien Aracil of LERMA in Paris analyzed 23 more cases. Neither group found changes in $\alpha$. Chand argues that any change between 6 and 10 billion years ago must have been less than one part in a million.

Why did similar methodologies applied to different source data lead to such a drastic discrepancy? The answer is not yet known. The results obtained by these researchers are of excellent quality, but the sizes of their samples and the ages of the analyzed radiation are significantly smaller than ours. In addition, Chand used a simplified version of the many-multiplet method and did not fully evaluate all the experimental and systematic errors.

The renowned Princeton astrophysicist John Bahcall has criticized the many-multiplet method itself, but the problems he points to fall in the category of random errors, which are minimized when large samples are used. Bahcall, together with Jeffrey Newman of Lawrence Berkeley National Laboratory, considered emission lines rather than absorption lines. Their approach is far less precise, although it may prove useful in the future.

Legislative reform

If our results are correct, the consequences will be enormous. Until recently, all attempts to estimate what would happen to the Universe if the fine structure constant changed were unsatisfactory: they went no further than treating $\alpha$ as a variable in the same formulas that had been derived under the assumption that it is constant, a dubious approach, you will agree. If $\alpha$ changes, then the energy and momentum in the effects associated with it must be conserved, which must affect the gravitational field in the Universe. In 1982, Jacob D. Bekenstein of the Hebrew University of Jerusalem was the first to generalize the laws of electromagnetism to the case of non-constant constants. In his theory, $\alpha$ is treated as a dynamical component of nature, i.e., as a scalar field. Four years ago, one of us (Barrow), together with Håvard Sandvik and João Magueijo of Imperial College London, extended Bekenstein's theory to include gravity.

The predictions of the generalized theory are temptingly simple. Since electromagnetism on cosmic scales is much weaker than gravity, changes in $\alpha$ of a few millionths have no noticeable effect on the expansion of the Universe. But the expansion substantially affects $\alpha$, through the imbalance between the energies of the electric and magnetic fields. During the first tens of thousands of years of cosmic history, radiation dominated over charged particles and maintained the balance between the electric and magnetic fields. As the Universe expanded, the radiation thinned out and matter became the dominant element of the cosmos. The electric and magnetic energies became unequal, and $\alpha$ began to grow in proportion to the logarithm of time. About 6 billion years ago, dark energy took over, accelerating the expansion, which hinders the propagation of all physical interactions in free space. As a result, $\alpha$ became nearly constant again.

The described picture is consistent with our observations. The quasar spectral lines characterize the period of cosmic history when matter dominated and $\alpha$ was increasing. The laboratory measurements and the Oklo results correspond to the period when dark energy dominates and $\alpha$ is constant. Further study of the effect of changes in $\alpha$ on the radioactive elements in meteorites is especially interesting, because it probes the transition between these two periods.

Alpha is just the beginning

If the fine structure constant changes, material objects must fall differently. Galileo once formulated the weak equivalence principle, according to which bodies in a vacuum fall at the same rate regardless of what they are made of. But changes in $\alpha$ must generate a force acting on all charged particles. The more protons an atom contains in its nucleus, the more strongly it will feel this force. If the conclusions drawn from the analysis of the quasar observations are correct, the free-fall accelerations of bodies made of different materials should differ by about $1\cdot10^{-14}$. This is 100 times smaller than can be measured in the laboratory, but large enough to reveal differences in experiments such as STEP (Satellite Test of the Equivalence Principle).

In previous studies of $\alpha$, scientists neglected the inhomogeneity of the Universe. Like all galaxies, our Milky Way is about a million times denser than cosmic space on average, so it is not expanding along with the Universe. In 2003, Barrow and David F. Mota of Cambridge calculated that $\alpha$ can behave differently inside a galaxy than in emptier regions of space. As soon as a young galaxy condenses and, relaxing, comes to gravitational equilibrium, $\alpha$ becomes constant inside the galaxy but continues to change outside. Thus, terrestrial experiments testing the constancy of $\alpha$ suffer from a biased selection of conditions. We have yet to work out how this affects tests of the weak equivalence principle. No spatial variations of $\alpha$ have been observed so far. Relying on the homogeneity of the cosmic microwave background, Barrow recently showed that $\alpha$ does not vary by more than $1\cdot10^{-8}$ between regions of the celestial sphere separated by $10^\circ$.

It remains for us to wait for new data and new studies that will finally confirm or refute the hypothesis of a change in $\alpha$. Researchers have focused on this constant simply because the effects due to its variation are easier to see. But if $\alpha$ is truly mutable, then other constants must change too. In that case, we will have to admit that the internal mechanisms of nature are far more complicated than we imagined.

ABOUT THE AUTHORS:
John D. Barrow and John K. Webb took up the study of the physical constants in 1996, during a joint sabbatical at the University of Sussex in England. There Barrow explored new theoretical possibilities for changing constants, while Webb was engaged in quasar observations. Both authors write popular-science books and often appear in television programs.

Order is the first law of heaven.

Alexander Pope

Fundamental world constants are constants that carry information about the most general, fundamental properties of matter. They include, for example, $G$, $c$, $e$, $h$, $m_e$, etc. What unites these constants is the information they contain. Thus, the gravitational constant $G$ is a quantitative characteristic of the universal interaction inherent in all objects of the Universe: gravitation. The speed of light $c$ is the maximum possible speed of propagation of any interaction in nature. The elementary charge $e$ is the minimum possible value of electric charge existing in nature in the free state (quarks, with their fractional electric charges, apparently exist in the free state only in a superdense, hot quark-gluon plasma). The Planck constant $h$ determines the minimum change of the physical quantity called action and plays a fundamental role in the physics of the microworld. The rest mass $m_e$ of the electron characterizes the inertial properties of the lightest stable charged elementary particle.

By a constant of a given theory we mean a quantity that, within the framework of that theory, is considered always unchanged. The presence of constants in the expressions of many laws of nature reflects the relative invariance of certain aspects of reality, manifested in the presence of regularities.

The fundamental constants $c$, $h$, $e$, $G$, etc. are the same for all parts of the Metagalaxy and do not change with time; for this reason they are called world constants. Certain combinations of world constants determine important features of the structure of natural objects and shape the character of a number of fundamental theories.

The combination

$$a_0 = \frac{\hbar^2}{m_e e^2}$$

determines the size of the spatial shell of atomic phenomena (here $m_e$ is the electron mass), and

$$E_0 = \frac{m_e e^4}{\hbar^2}$$

the characteristic energies of these phenomena; the quantum of large-scale magnetic flux in superconductors is given by the quantity

$$\Phi_0 = \frac{\pi \hbar c}{e};$$

the limiting mass of stationary astrophysical objects is determined by the combination

$$M \sim \left(\frac{\hbar c}{G}\right)^{3/2} \frac{1}{m_N^2},$$

where $m_N$ is the nucleon mass; the entire mathematical apparatus of quantum electrodynamics rests on the existence of the small dimensionless quantity

$$\alpha = \frac{e^2}{\hbar c} \approx \frac{1}{137},$$

which determines the intensity of electromagnetic interactions.
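As a sanity check of the limiting-mass combination (our illustration, taking the nucleon mass $m_N$ to be the proton mass):

```python
from scipy.constants import hbar, c, G, m_p

# Limiting (Chandrasekhar-type) mass scale built from fundamental constants
M = (hbar * c / G)**1.5 / m_p**2
print(M)             # ~3.7e30 kg
print(M / 1.989e30)  # ~1.9 solar masses: the right order of magnitude for stellar remnants
```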

An analysis of the dimensions of the fundamental constants leads to a new understanding of the problem as a whole. Individual dimensional fundamental constants, as noted above, play a definite role in the structure of the corresponding physical theories. But when it comes to developing a unified theoretical description of all physical processes and forming a unified scientific picture of the world, dimensional physical constants give way to dimensionless fundamental constants such as $\alpha$; the role of these constants in the formation of the structure and properties of the Universe is very large. The fine structure constant is the quantitative characteristic of one of the four types of fundamental interactions existing in nature, the electromagnetic interaction; the other fundamental interactions are the gravitational, the strong and the weak. The existence of the dimensionless electromagnetic interaction constant

$$\alpha = \frac{e^2}{\hbar c} \approx \frac{1}{137}$$

evidently presupposes the existence of similar dimensionless constants characterizing the other three types of interactions. These interactions are characterized by the following dimensionless fundamental constants: the strong interaction constant

$$\alpha_s = \frac{g_s^2}{\hbar c},$$

the weak interaction constant

$$\alpha_w \sim \frac{G_F m_p^2 c}{\hbar^3} \approx 10^{-5},$$

where $G_F$ is the Fermi constant of the weak interaction, and the gravitational interaction constant:

$$\alpha_G = \frac{G m_p^2}{\hbar c} \approx 6\times10^{-39}.$$

The numerical values of these constants determine the relative "strength" of the corresponding interactions. Thus, the electromagnetic interaction is about 137 times weaker than the strong one. The weakest is the gravitational interaction, $10^{39}$ times weaker than the strong one. The interaction constants also determine how quickly the transformation of one particle into another proceeds in various processes. The electromagnetic constant describes the transformation of any charged particle into the same particle plus a photon, with a changed state of motion. The strong interaction constant is a quantitative characteristic of the mutual transformations of baryons with the participation of mesons. The weak interaction constant determines the intensity of transformations of elementary particles in processes involving neutrinos and antineutrinos.

We must also note one more dimensionless physical constant, the one that determines the dimension of physical space, which we denote by $N$. For us it is customary that physical events unfold in three-dimensional space, i.e., $N = 3$, although the development of physics has repeatedly led to concepts that do not fit "common sense" yet reflect real processes existing in nature.

Thus, the "classical" dimensional fundamental constants play a decisive role in the structure of the corresponding physical theories. From them, the fundamental dimensionless constants of the unified theory of interactions are formed - These constants and some others, as well as the dimension of the space N, determine the structure of the Universe and its properties.