

Hello, Habr readers!

Today I would like to touch on the "Millennium Problems", which have occupied the best minds of our planet for decades, and in some cases for centuries.

After Grigory Perelman proved the Poincaré conjecture (now a theorem), the main question that interested many people was: "And what did he actually prove? Explain it in simple terms!" Taking this opportunity, I will try to explain the other Millennium Problems in simple terms, or at least to approach them from another side, closer to reality.

Equality of classes P and NP

We all remember quadratic equations from school, which are solved via the discriminant. This problem belongs to the class P (Polynomial time): there is a fast solution algorithm for it (here and below, "fast" means running in polynomial time), which we memorize.
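To make the distinction concrete, here is a small Python sketch of my own (an illustration, not any formal definition): solving a quadratic equation via the discriminant takes a constant number of arithmetic operations, and verifying a candidate root is just as cheap.

```python
import math

def solve_quadratic(a, b, c):
    """Solve a*x^2 + b*x + c = 0 over the reals via the discriminant."""
    d = b * b - 4 * a * c          # the discriminant
    if d < 0:
        return []                  # no real roots
    r = math.sqrt(d)
    return sorted({(-b - r) / (2 * a), (-b + r) / (2 * a)})

def verify_root(a, b, c, x, eps=1e-9):
    """Checking a candidate root costs one evaluation of the polynomial."""
    return abs(a * x * x + b * x + c) < eps

roots = solve_quadratic(1, -3, 2)   # x^2 - 3x + 2 = 0
print(roots)                        # [1.0, 2.0]
print(all(verify_root(1, -3, 2, x) for x in roots))  # True
```

Here both solving and verifying run in polynomial (in fact constant) time, which is exactly why the problem sits in both classes at once.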

There are also NP problems (Non-deterministic Polynomial time), for which a found solution can be quickly verified by some algorithm, for example, checked by a computer. If we return to the quadratic equation, we see that in this example a proposed solution is verified as easily and quickly as it is found. The logical conclusion is that this problem belongs to both classes at once.

There are many such problems, but the main question is this: can every problem whose solution is easy and quick to verify also be solved easily and quickly? For some problems no fast solution algorithm has been found so far, and it is not known whether one exists at all.

On the Internet I also came across this interesting and transparent formulation:

Suppose that, being at a large party, you want to make sure that an acquaintance of yours is also there. If you are told that he is sitting in the corner, a split second and a glance will be enough to confirm the information. Without this information, you will have to walk around the entire room, examining the guests.

The question in this case is still the same: is there an algorithm of actions that would let you, even without knowing where the person is, find him as quickly as if you knew?

This problem is of great importance for the most diverse areas of knowledge, but it has not been solved for more than 40 years.

Hodge hypothesis

In reality there are many geometric objects, both simple and far more complex. Obviously, the more complex the object, the more laborious it is to study. Scientists have invented, and now actively use, an approach whose main idea is to use simple "bricks" with already known properties, glued together to form a likeness of the object, much like the construction set familiar to everyone from childhood. Knowing the properties of the bricks, one can get at the properties of the object itself.

The Hodge conjecture concerns certain properties of both the "bricks" and the objects.

Riemann hypothesis

We all know from school the prime numbers, divisible only by themselves and by one: 2, 3, 5, 7, 11, ... Since ancient times people have tried to find a pattern in their distribution, but luck has so far smiled on no one. As a result, scientists turned their efforts to the prime-counting function, which gives the number of primes less than or equal to a given number. For example, for 4 there are 2 primes; for 10, there are already 4. The Riemann hypothesis specifies properties of precisely this distribution function.
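A quick sketch of that counting function (my own illustration, using naive trial division; fine for small numbers):

```python
def is_prime(n):
    """Trial division: n is prime iff no d with d*d <= n divides it."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def prime_pi(x):
    """The prime-counting function: number of primes <= x."""
    return sum(1 for n in range(2, x + 1) if is_prime(n))

print(prime_pi(4))   # 2  (primes 2, 3)
print(prime_pi(10))  # 4  (primes 2, 3, 5, 7)
```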

Many statements about the computational complexity of certain integer algorithms are proven under the assumption that this hypothesis is true.

Yang-Mills theory

The equations of quantum physics describe the world of elementary particles. The physicists Yang and Mills, having discovered a connection between geometry and particle physics, wrote down their equations, unifying the theories of the electromagnetic, weak, and strong interactions. At one time Yang-Mills theory was regarded as a mere mathematical refinement with no relation to reality. Later, however, the theory began to receive experimental confirmation, yet in its general form it still remains unsolved.

On the basis of Yang-Mills theory, the Standard Model of elementary particle physics was built, within which the sensational Higgs boson was predicted and recently discovered.

Existence and smoothness of solutions of the Navier-Stokes equations

Fluid flow, air currents, turbulence: these and many other phenomena are described by the equations known as the Navier-Stokes equations. For some special cases solutions have already been found (as a rule, by discarding parts of the equations that do not affect the final result), but in the general case the solutions of these equations are unknown, and it is not even known how to approach solving them.

Birch-Swinnerton-Dyer hypothesis

For the equation x^2 + y^2 = z^2, Euclid in his time gave a complete description of the solutions, but for more complex equations finding solutions becomes extremely difficult; it is enough to recall the history of the proof of Fermat's famous theorem to be convinced of this.

This hypothesis concerns the description of algebraic equations of degree 3, the so-called elliptic curves, and is in fact the only relatively simple general way of computing the rank, one of the most important properties of elliptic curves.

Elliptic curves occupied one of the most important places in the proof of Fermat's theorem. In cryptography they form an entire branch named after them, and some of the Russian digital signature standards are based on them.

Poincaré's hypothesis

I think most people, if not everyone, have certainly heard of it. The popularization most often encountered, including in the mainstream media, is this: "a rubber band stretched over a sphere can be smoothly contracted to a point, but one stretched over a donut cannot". In fact, this formulation is valid for Thurston's conjecture, which generalizes Poincaré's conjecture and which Perelman actually proved.

A special case of Poincaré's hypothesis tells us that any three-dimensional manifold without boundary (the universe, for example) is like a three-dimensional sphere. The general case carries this statement over to objects of any dimension. It is worth noting that a donut, in the same sense that the universe is like a sphere, is like an ordinary coffee mug.

Conclusion

Nowadays, mathematics is associated with strange-looking scientists talking about equally strange things. Many speak of its isolation from the real world. Many people, both young and fully adult, say that mathematics is an unnecessary science and that after school or university it is of no use in life.

But in reality this is not so: mathematics was created as a mechanism for describing our world, and in particular many observable things. It is everywhere, in every home. As V.O. Klyuchevsky said: "It is not the flowers' fault that the blind man does not see them."

Our world is far from as simple as it seems, and mathematics accordingly grows more complex and more refined, providing ever firmer ground for a deeper understanding of existing reality.

December 5, 2014 at 06:54 PM

Millennium Challenges. Just about complicated

  • Entertaining tasks,
  • Mathematics


level 80 developer January 18, 2018 at 01:05 PM

Proof of the Riemann Hypothesis

  • Mathematics

The Riemann Hypothesis is a mathematical conjecture formulated in 1859 by Bernhard Riemann, and it has not been resolved to this day.

The Riemann hypothesis states:

All non-trivial zeros of the zeta function have real part equal to 1/2.
I was able to prove this statement. My conclusions are based on von Koch's 1901 result.

If the Riemann Hypothesis is correct, then

π(x) = Li(x) + O(√x · ln x)

The Riemann hypothesis is of great importance in quantum mechanics as well as cryptography.
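Before going further, the two functions in this formula are easy to compare numerically. Below is my own rough check (an illustration for small arguments only, proving nothing): π(x) and Li(x) stay close, and their gap stays well under √x·ln x.

```python
import math

def primes_up_to(n):
    """Simple sieve; returns the list of primes <= n."""
    flags = [True] * (n + 1)
    flags[0] = flags[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if flags[p]:
            for m in range(p * p, n + 1, p):
                flags[m] = False
    return [i for i in range(2, n + 1) if flags[i]]

def li(x, steps=100000):
    """Midpoint-rule approximation of Li(x) = integral from 2 to x of dt/ln(t)."""
    h = (x - 2.0) / steps
    return sum(h / math.log(2.0 + (k + 0.5) * h) for k in range(steps))

for x in (100, 1000, 10000):
    pi_x = len(primes_up_to(x))
    gap = abs(pi_x - li(x))
    bound = math.sqrt(x) * math.log(x)
    print(x, pi_x, round(li(x), 1), gap < bound)
    # e.g. pi(100) = 25 while Li(100) is about 29.1, well inside the bound
```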

Formulas for π(x) and Li(x)

In this section I will present the two formulas with which I proved the Riemann Hypothesis: a new formula for the function π(x) and a new method of integrating the function 1/ln(x).

The function π(x) shows how many primes there are up to a given number x. Primes are numbers divisible only by themselves and by one, for example: 2, 3, 5, 7, ...

Formula for the function π(x):

(1.1)
Proof:

This formula excludes from the given number x all non-prime numbers, following the rules of the sieve of Eratosthenes. The sieve of Eratosthenes is a method, invented by Eratosthenes of Cyrene, for determining the sequence of primes. The algorithm is as follows: take the series of natural numbers without one:

2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18…

and exclude from it all even numbers except the smallest of them, the two. You get:

2 3 5 7 9 11 13 15 17…

Then, from the resulting sequence, exclude all numbers divisible by the next prime after two, the number 3, not counting 3 itself. You get:

2 3 5 7 11 13 17…

If you continue this ad infinitum, only the prime numbers remain. My formula works the same way. First the formula excludes one from the given number x, then the count of all even numbers except 2, then the count of numbers divisible by 3 (except three itself and the even multiples already excluded), and so on.
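The classical sieve procedure described above can be sketched in a few lines of Python (my own illustration of the textbook algorithm, not of formula (1.1) itself):

```python
def eratosthenes(limit):
    """Sieve of Eratosthenes: strike out multiples of each surviving prime."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    p = 2
    while p * p <= limit:
        if is_prime[p]:
            # p is prime; cross out its multiples starting from p*p
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
        p += 1
    return [n for n in range(2, limit + 1) if is_prime[n]]

print(eratosthenes(17))  # [2, 3, 5, 7, 11, 13, 17]
```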
fn(x) denotes the smallest number that must be subtracted from x to obtain a number divisible by n without remainder.

The graph of the function fn (x):


Fig. (1.1) Graph of the function fn (x)


Each expression in parentheses contains the count of certain non-prime numbers not exceeding x.

Sooner or later each expression in the parentheses of formula (1.1) becomes equal to zero, so this sum is not infinite.

I cannot prove formula (1.1) rigorously, but one can see that it is correct by observing that it reproduces the sieve of Eratosthenes. One could say that this formula is an analytical version of the sieve of Eratosthenes.

Formula for the function Li(x):

(1.2)
Proof:

Each term of this sum is the area of a rectangle under the graph of the function 1/ln(x); an infinite number of these rectangle areas converges to the area under the graph of 1/ln(x), starting from the argument 2. And since the function Li(x) is the integral of 1/ln(x), formula (1.2) is equal to Li(x).


Fig. (1.2) Rectangles under the graph of the function 1 / ln (x)

The upper right corner of every rectangle lies at a certain point of the graph, and since there are infinitely many rectangles, these corners cover all points of the graph from 1/ln(2) to 1/ln(x).
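The rectangle construction can also be checked numerically. Below is my own midpoint-rule sketch of the integral of 1/ln(t) from 2 to x; the step count is an arbitrary choice of mine:

```python
import math

def li_rect(x, steps=200000):
    """Approximate Li(x) = integral from 2 to x of dt/ln(t) by thin rectangles."""
    h = (x - 2.0) / steps          # width of each rectangle
    total = 0.0
    for k in range(steps):
        t = 2.0 + (k + 0.5) * h    # midpoint of the k-th rectangle
        total += h / math.log(t)   # area of one rectangle
    return total

print(round(li_rect(100.0), 2))    # about 29.08
```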

Proof

So, if the Riemann hypothesis is true, then

π(x) = Li(x) + O(√x · ln x)

Rearranging this expression, we find that

That is, if we prove this inequality, then it turns out that the Riemann hypothesis is true.
Substituting the derived formulas into the inequality, we get:


(1.3) Remainder term

Provided that x > 2, we transform this expression for simplicity.

From this we can conclude that if the inequality


(1.5)

is true, then the Riemann hypothesis is also true. Let us check. Moving all the terms of inequality (1.5) to the right-hand side, we get


(1.6)

The first difference in this expression is always negative for x > 2. The second difference is negative only for roughly x > 10, but this is not a problem, since we are interested only in large arguments, and expression (1.6) remains correct for them.

Inequality (1.6) is true, hence the inequality

is also true.

The Riemann hypothesis is proven.

Tags: Millennium Challenges, prime numbers

I wanted to talk in more detail about Henri Poincaré's recently proven conjecture, but then I decided to "broaden the problem" and tell about "everything" in concise form. So: in 2000 the Clay Mathematics Institute in Boston singled out "seven Millennium Problems" and offered a million dollars for each of them. Here they are:

1. Poincaré's hypothesis
2. Riemann hypothesis
3. Navier-Stokes equation
4. Cook's hypothesis
5. Hodge hypothesis
6. Yang-Mills theory
7. Birch-Swinnerton-Dyer hypothesis

We will talk about the Poincaré conjecture next time; for now, let us discuss the other problems in general outline.

Riemann hypothesis (1859)

Everyone knows what prime numbers are: numbers divisible only by 1 and by themselves, that is, 2, 3, 5, 7, 11, 13, 17, 19, and so on. But, interestingly, no regularity in their placement has been identified so far.
It is believed that in the vicinity of an integer x the average distance between successive primes is proportional to the logarithm of x. Nevertheless, so-called twin primes have long been known: pairs of primes whose difference is 2, for example 11 and 13, 29 and 31, 59 and 61. Sometimes they form whole clusters, for example 101, 103, 107, 109, and 113. If such clusters were found in the region of very large primes, the strength of the cryptographic keys currently in use could overnight become a very big question.
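The twin pairs mentioned above are easy to enumerate. A small sketch of my own (naive trial division, fine for small limits):

```python
def is_prime(n):
    """Trial division primality test."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def twin_primes(limit):
    """All pairs of primes (p, p + 2) with p + 2 <= limit."""
    return [(p, p + 2) for p in range(2, limit - 1)
            if is_prime(p) and is_prime(p + 2)]

print(twin_primes(70))
# [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61)]
```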
Riemann proposed his own approach, convenient for identifying large primes. According to it, the character of the distribution of primes may differ significantly from what is currently assumed. Riemann discovered that the number π(x) of primes not exceeding x is expressed in terms of the distribution of the non-trivial zeros of the Riemann zeta function ζ(s). He put forward the conjecture, neither proven nor refuted to this day, that all non-trivial zeros of the zeta function lie on the line Re(s) = 1/2.
In general, if the Riemann hypothesis is proven (if that is possible at all) and a suitable algorithm is chosen, it will become possible to break many passwords and secret codes.

Navier-Stokes equation (1830)

A nonlinear differential equation describing thermal convection in liquids and air flows. It is one of the key equations of meteorology.

p is the pressure
F is the external force
ρ (rho) is the density
ν (nu) is the viscosity
v is the velocity

Probably its exact analytical solution is interesting from a purely mathematical point of view, but approximate solution methods have existed for a long time. As usual in such cases, the nonlinear differential equation is decomposed into several linear ones; the trouble is that the solutions of the resulting system turned out to be unusually sensitive to the initial conditions. This became apparent when, with the advent of computers, it became possible to process large amounts of data.

In 1963, the American meteorologist Edward Lorenz of the Massachusetts Institute of Technology asked why the rapid improvement of computers had not led to the realization of meteorologists' dream: a reliable medium-term (2-3 weeks ahead) weather forecast. He proposed the simplest model, consisting of three ordinary differential equations describing air convection, computed it on a machine, and obtained an astonishing result: dynamic chaos, a complex non-periodic motion with a finite forecast horizon in a deterministic system (one in which the future is uniquely determined by the past). Thus the strange attractor was discovered. The unpredictability of such systems lies not in any failure of the mathematical theory of existence and uniqueness of solutions for given initial conditions, but in the extraordinary sensitivity of the solution to those conditions: close initial conditions lead over time to completely different final states of the system, and the difference often grows exponentially with time, that is, extremely quickly.
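Lorenz's three-equation model is easy to reproduce. The sketch below is my own, using a crude explicit Euler integrator with the classic parameters (sigma = 10, rho = 28, beta = 8/3); it shows the sensitivity to initial conditions: two starts differing by one millionth end up far apart.

```python
def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit-Euler step of the Lorenz convection model."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

def trajectory(start, steps=5000, dt=0.005):
    """Integrate for steps*dt time units and return the final state."""
    s = start
    for _ in range(steps):
        s = lorenz_step(s, dt)
    return s

a = trajectory((1.0, 1.0, 1.0))
b = trajectory((1.0, 1.0, 1.0 + 1e-6))  # same start, perturbed by one millionth
separation = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
print(a)
print(b)
print(separation)  # the tiny initial difference has grown by many orders of magnitude
```

Euler integration is of course the bluntest possible tool here, but the qualitative effect (exponential divergence of nearby trajectories) survives any reasonable integrator.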

Cook's hypothesis (1971)

How quickly can one check a specific answer? This is an unsolved problem of logic and computation. Stephen Cook formulated it as follows: "can verifying the correctness of a solution to a problem take longer than obtaining the solution itself, regardless of the verification algorithm?" A solution of this problem could revolutionize the foundations of the cryptography used in data transfer and storage, and advance the development of so-called "quantum computers", which in turn would speed up brute-force search algorithms (for example, the same password cracking).
Let a function of 10000 variables be given: f(x_1, ..., x_10000). For simplicity, assume that the variables take the values 0 or 1, and that the value of the function is also 0 or 1. There is an algorithm that computes this function for any given set of arguments in a fairly short time (say, t = 0.1 s).
We need to find out whether there exists a set of arguments on which the value of the function is 1. The set itself, on which the function equals 1, does not interest us; we only need to know whether it exists. What can we do? The simplest thing is to stupidly run through all 2^10000 combinations of arguments, computing the value of the function on each. In the worst case this takes t·2^10000 seconds, which exceeds the age of the Universe many times over.
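To feel the combinatorics on a toy scale, here is the same brute-force search with 16 variables instead of 10000 (the "hidden" satisfying assignment is my own arbitrary choice, playing the role of the black-box function):

```python
from itertools import product

# A toy black box: f equals 1 on exactly one hidden assignment of 16 bits.
HIDDEN = (1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0)

def f(bits):
    return 1 if bits == HIDDEN else 0

n = 16
tried = 0
found = False
for bits in product((0, 1), repeat=n):  # enumerate all 2**16 assignments
    tried += 1
    if f(bits) == 1:
        found = True
        break

print(found, tried, 2 ** n)  # had to scan a large share of 65536 assignments
```

With n = 16 this finishes instantly; with n = 10000 the same loop would outlast the Universe, which is the whole point.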
But if we know the nature of the function f, the enumeration can be reduced by discarding the sets of arguments on which the function is a priori equal to 0. For many real problems this allows solving them in reasonable time. At the same time, there are problems (the so-called NP-complete problems) for which even a reduced enumeration leaves the total solution time unacceptable.

Now for the physical side. It is known that a quantum bit can be in state 0 or 1 with some probability. And, interestingly, one can find out which of the following states it is in:

A: 0 with probability 1
B: 1 with probability 1
C: 0 with probability p, 1 with probability 1-p

The essence of computation on a quantum computer is to take 10000 such quanta in state C and feed them to the input of the function f. If a quantum in state A is obtained at the output, this means that f = 0 on all possible argument sets; if the output is a quantum in state B or C, this means that there is a set on which f = 1.
Obviously, a "quantum computer" would significantly speed up tasks involving enumeration of data, but it would be ineffective at speeding up the writing or reading of data.

Yang-Mills theory

This is probably the only one of the seven questions listed that has truly fundamental significance. Its solution would significantly advance the creation of a "unified field theory", that is, the identification of a deterministic relationship between the four known types of interaction:

1. Gravitational
2. Electromagnetic
3. Strong
4. Weak

In 1954, Yang Zhenning and Robert Mills proposed a theory in which the electromagnetic and weak interactions were unified (Glashow, Weinberg, Salam, Nobel Prize 1979). Moreover, it still serves as the basis of quantum field theory. But here the mathematical apparatus began to fail. The point is that "quantum particles" behave quite differently from the "large bodies" of Newtonian physics, although there are common features. For example, a charged particle creates an electromagnetic field, and a particle with nonzero mass a gravitational one. A particle is equivalent to the set of fields it creates, since any interaction with other particles happens through these fields; from the point of view of physics, considering the fields generated by a particle is the same as considering the particle itself.
But that is, so to speak, a first approximation. In the quantum approach, one and the same particle can be described in two different ways: as a particle with a certain mass and as a wave with a certain length. A single particle-wave is described not by its position in space but by a wave function (usually denoted Ψ), and its location is probabilistic in nature: the probability of finding the particle at a point x at time t is P(x, t) = |Ψ(x, t)|^2. Nothing unusual, it would seem, but at the level of microparticles an "unpleasant" effect arises: if several fields act on a particle at once, their combined effect can no longer be decomposed into the action of each field taken separately; the classical superposition principle does not work. This happens because in this theory not only the particles of matter attract one another, but also the field lines themselves. Because of this the equations become nonlinear, and the whole arsenal of mathematical tricks for solving linear equations cannot be applied to them. Finding solutions, and even proving that they exist, becomes an incomparably harder task.
That is probably why it cannot be solved "head-on"; in any case, the theorists chose a different path. Thus, building on the findings of Yang and Mills, Murray Gell-Mann constructed the theory of the strong interaction (Nobel Prize).
The main feature of the theory is the introduction of particles with fractional electric charge: quarks.

But in order to mathematically "tie" the electromagnetic, strong and weak interactions to each other, three conditions must be met:

1. The presence of a "gap" in the mass spectrum (in English: mass gap)
2. Quark confinement: quarks are locked inside hadrons and cannot, in principle, be obtained in free form
3. Symmetry breaking

Experiments show that these conditions are met in reality, but there is no rigorous mathematical proof. That is, in effect, the Yang-Mills theory must be adapted to four-dimensional space with the three properties listed above. To my mind, that is a task worth much more than a million. And although not a single decent physicist doubts the existence of quarks, they have never been detected experimentally in free form. It is assumed that at scales of about 10^-30 any difference between the electromagnetic, strong, and weak interactions disappears (the so-called "Grand Unification"); the trouble is that the energy required for such experiments (more than 10^16 GeV) cannot be reached in accelerators. But do not worry: testing Grand Unification is a matter of the coming years, unless, of course, some extraneous problems fall upon humanity. Physicists have already devised a test experiment involving the instability of the proton (a consequence of the Yang-Mills theory). But that topic is beyond the scope of this post.

Well, let us remember that this is not everything. The last bastion remains: gravity. We know essentially nothing about it, except that "everything attracts" and "space-time is curved". It is clear that all the forces in the world reduce to a single superforce, or, as they say, to a "Superunification". But what is the principle of this superunification? Einstein believed the principle is geometric, like the principle of general relativity. That may well be. In other words, physics at its most basic level may be just geometry.

Birch and Swinnerton-Dyer hypothesis

Remember Fermat's Last Theorem, which a certain Englishman allegedly proved in 1994? It took 350 years! The problem has now been extended: one needs to describe all integer solutions x, y, z of algebraic equations, that is, equations in several variables with integer coefficients. An example of an algebraic equation is x^2 + y^2 = z^2. Euclid gave a complete description of the solutions of this equation, but for more complex equations obtaining solutions becomes extremely difficult (for example, proving the absence of integer solutions of the equation x^n + y^n = z^n).

Birch and Swinnerton-Dyer conjectured that the number of solutions is determined by the value of the zeta function ζ(s) associated with the equation at the point 1: if the value of ζ(s) at the point 1 is 0, there are infinitely many solutions; conversely, if it is not 0, there are only finitely many. Here the problem, incidentally, has something in common with the Riemann hypothesis, except that there it is the distribution of the non-trivial zeros of the zeta function ζ(s) that is in question.
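The zeta function of a curve is built from counts of the curve's points modulo primes. As a toy illustration (my own sketch, with an arbitrarily chosen example curve y^2 = x^3 - x, not anything from the conjecture's literature), here is a brute-force point count over small prime fields:

```python
def count_points(a, b, p):
    """Number of points on y^2 = x^3 + a*x + b over F_p, plus the point at infinity."""
    squares = {}
    for y in range(p):               # tally how many y give each square y^2 mod p
        s = y * y % p
        squares[s] = squares.get(s, 0) + 1
    total = 1                        # the point at infinity
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        total += squares.get(rhs, 0)
    return total

# The curve y^2 = x^3 - x over F_5, F_7, F_11.
# By Hasse's theorem each count stays within 2*sqrt(p) of p + 1.
for p in (5, 7, 11):
    print(p, count_points(-1, 0, p))  # prints: 5 8, then 7 8, then 11 12
```

These per-prime counts are exactly the raw material packaged into the L-function whose behavior at the point 1 the conjecture describes.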

Hodge hypothesis
Probably the most abstract topic.
As you know, to describe the properties of complex geometric objects, one approximates them. For example, a ball (although it is quite simple) can be represented as a surface made up of small squares. But for more complex surfaces the question arises: to what extent can we approximate the shape of a given object by gluing together simple bodies of increasing dimension? This method proved effective in describing the various objects encountered in mathematics, but in some cases one had to add pieces that had no geometric interpretation whatsoever.
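The simplest invariant that survives such a gluing is the Euler characteristic V − E + F, which equals 2 for any decomposition of a sphere into simple pieces, however you cut it. A minimal sketch (helper name is mine) checks this for an octahedron glued from 8 triangles:

```python
def euler_characteristic(faces):
    """Euler characteristic V - E + F of a surface given by its faces,
    each face a tuple of vertex labels listed in order around the face."""
    vertices = set()
    edges = set()
    for face in faces:
        vertices.update(face)
        for i in range(len(face)):
            # each edge joins consecutive vertices of some face
            edges.add(frozenset((face[i], face[(i + 1) % len(face)])))
    return len(vertices) - len(edges) + len(faces)

# Octahedron: poles N, S plus an equatorial square 1-2-3-4, 8 triangular faces
octahedron = [("N", 1, 2), ("N", 2, 3), ("N", 3, 4), ("N", 4, 1),
              ("S", 2, 1), ("S", 3, 2), ("S", 4, 3), ("S", 1, 4)]
print(euler_characteristic(octahedron))  # 2, as for any triangulation of the sphere
```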
I have leafed through the rather abstruse book by Gelfand and Manin on this subject; it describes Hodge theory for smooth non-compact varieties, but to be honest I did not understand much, as I am not really at home in analytic geometry. The point there is that integrals over certain cycles can be computed via residues, and modern computers are good at that.
The Hodge conjecture itself states that for certain types of spaces, called projective algebraic varieties, the so-called Hodge cycles are combinations of objects that do have a geometric interpretation, namely algebraic cycles.

Mathematical physicists have announced progress on a 150-year-old problem for which the Clay Mathematics Institute offers a million-dollar prize. The scientists have presented an operator that satisfies the Hilbert-Pólya conjecture, which states that there exists a differential operator whose eigenvalues correspond exactly to the nontrivial zeros of the Riemann zeta function. The article was published in the journal Physical Review Letters.

The Riemann hypothesis is one of the "Millennium Problems" for whose proof the American Clay Mathematics Institute awards a million dollars. The Poincaré conjecture (now the Poincaré-Perelman theorem), proved by our compatriot, was also on this list. The Riemann hypothesis, formulated in 1859, states that all nontrivial zeros of the Riemann zeta function (that is, the values of the complex argument at which the function vanishes) lie on the line ½ + it, i.e. their real part is ½. The zeta function itself appears in many branches of mathematics; in number theory, for example, it is connected with the number of primes below a given bound.
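The statement is easy to probe numerically: the zeta function really does vanish at the first point ½ + 14.1347…i on the critical line. A sketch in pure Python (not from the article; it uses Borwein's standard acceleration of the alternating eta series, valid for Re(s) > 0, s ≠ 1) evaluates ζ there:

```python
import cmath

def zeta(s, n=50):
    """Riemann zeta for Re(s) > 0, s != 1, via Borwein's acceleration of the
    alternating eta series: zeta(s) = eta(s) / (1 - 2^(1-s))."""
    # Borwein coefficients d_k = n * sum_{i<=k} (n+i-1)! 4^i / ((n-i)! (2i)!)
    t = 1.0 / n          # term for i = 0
    acc = t
    d = [n * acc]
    for i in range(1, n + 1):
        t *= 4.0 * (n + i - 1) * (n - i + 1) / ((2 * i) * (2 * i - 1))
        acc += t
        d.append(n * acc)
    eta = sum((-1) ** k * (d[k] - d[n]) / (k + 1) ** s for k in range(n)) / -d[n]
    return eta / (1 - 2 ** (1 - s))

print(abs(zeta(2) - cmath.pi ** 2 / 6))       # ~0: Euler's zeta(2) = pi^2/6
print(abs(zeta(0.5 + 14.134725141734693j)))   # ~0: first nontrivial zero
```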

Function theory predicts that the set of nontrivial zeros of the zeta function should resemble the set of eigenvalues ("solutions" of matrix equations) of some other function from the class of differential operators often used in physics. The idea that a specific operator with such properties exists is called the Hilbert-Pólya conjecture, although neither of the two ever published a paper on the subject. "Since there are no publications by the 'authors' on this topic, the formulation of the conjecture varies depending on the interpretation," explains one of the authors of the article, Dorje Brody of Brunel University London. "However, two points must be fulfilled: (a) one must find an operator whose eigenvalues correspond to the nontrivial zeros of the zeta function, and (b) one must establish that those eigenvalues are real numbers. The main goal of our work was point (a). Further work is needed to prove part (b)."
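Why is point (b) the crux? For a Hermitian (self-adjoint) operator, real eigenvalues come for free, which is why Hilbert-Pólya is usually phrased in terms of self-adjointness. A toy 2×2 sketch (my own illustration, not the operator from the article) makes this visible: the eigenvalues of a Hermitian matrix are real because the discriminant is a sum of real squares.

```python
def hermitian_2x2_eigenvalues(a, b, c):
    """Eigenvalues of the Hermitian matrix [[a, b], [conj(b), c]],
    where a and c are real and b may be complex."""
    mean = (a + c) / 2
    # discriminant = (real)^2 + |b|^2 >= 0, so both roots are real
    radius = ((a - c) / 2) ** 2 + abs(b) ** 2
    return mean - radius ** 0.5, mean + radius ** 0.5

print(hermitian_2x2_eigenvalues(1.0, 2 + 1j, 3.0))  # two real numbers, 2 -/+ sqrt(6)
```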

Another important conjecture in this area is the idea of Berry and Keating that if the desired operator exists, it should correspond theoretically to some quantum system with certain properties. "We have determined the quantization conditions for the Berry-Keating Hamiltonian, thereby essentially proving the conjecture that bears their name," Brody adds. "It may be disappointing, but the resulting Hamiltonian does not seem to correspond to any physical system in an obvious way; at least, we found no such correspondence."

The greatest difficulty is proving that the eigenvalues are real. The authors are optimistic about this, and a supporting argument is based on PT symmetry. This idea from particle physics means that if all directions of four-dimensional space-time are reversed, the system looks the same. Nature as a whole is not PT-symmetric, but the operator obtained does have this property. As shown in the article, if one proves that this symmetry is broken for the imaginary part of the operator, then all the eigenvalues are real, which would complete the proof of the Riemann hypothesis.
