P vs. NP: How researchers plan to answer the last question in computer science

Monday, July 19, 2021 was one of those days again: a day in the middle of the second pandemic summer when everything seems to go wrong. At least that’s how it reads in the tweet of a leading computer scientist who actually does research in complexity theory, but who was now complaining loudly about the administrative mess at a leading journal – and who ended his tweet with a very charged “Happy Monday”. In some parallel universe, this day might actually have been a happy Monday. Because on that day, a paper appeared in the online edition of the respected journal “ACM Transactions on Computation Theory”, which publishes “outstanding research on the limits of feasible computation” – a paper that allegedly contained a solution to the problem of all problems, the Holy Grail of theoretical computer science.


Technology Review 2/2022 in the heise shop


This problem – known as “P versus NP” – is considered the most important open question in theoretical computer science. A bounty of one million dollars is offered for its solution. It touches on key questions about the promises, limits, and ambitions of computing: Which problems can computers realistically solve? How much time do they need? And why are some problems more difficult than others? What does “difficult” actually mean?

Specifying an absolute time for calculations makes little sense, because the speed of a calculation depends on the computing power of the machine, but also on the size of the problem itself: sorting a list with 1,000 elements takes longer than sorting a list with 100. Computer scientists therefore classify problems by how the computation time grows with the size of the input. “P” stands for problems that a computer can solve efficiently. More precisely, P stands for “polynomial”: the running time can be described as a polynomial – a sum of powers – of the input size. “NP” stands for “nondeterministic polynomial”. These are problems whose solutions, once found, are relatively easy to check – even though finding them seems hard, with the best known algorithms taking, for example, exponential time. Such problems work like trapdoors or gears with ratchets: they can be moved easily in one direction, in the other only against extreme resistance. You take a candidate solution and check whether it really works – as with jigsaw puzzles or Sudoku.
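This asymmetry between finding and checking a solution can be illustrated with the subset-sum problem, a classic NP problem: given a list of numbers, is there a subset that adds up to a target value? The following Python sketch (an illustration added here, not part of the original article) contrasts a brute-force solver, whose work grows exponentially with the number of elements, with a verifier that runs in polynomial time:

```python
from itertools import combinations

def solve_subset_sum(numbers, target):
    """Brute force: tries all 2^n subsets -- exponential time."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None  # no subset adds up to the target

def verify_subset_sum(numbers, target, candidate):
    """Checking a proposed solution -- a single pass, polynomial time."""
    remaining = list(numbers)
    for x in candidate:
        if x not in remaining:
            return False
        remaining.remove(x)
    return sum(candidate) == target

nums = [3, 34, 4, 12, 5, 2]
solution = solve_subset_sum(nums, 9)          # slow search: [4, 5]
print(verify_subset_sum(nums, 9, solution))   # fast check: True
```

For six numbers the search is instant, but each additional element doubles the number of subsets the solver may have to try, while the checker’s effort grows only linearly with the list.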

That sounds very abstract, but many NP problems are technically and scientifically highly relevant – such as the question of which prime factors a number can be broken down into. Much of the encryption on the Internet relies on the fact that the computational effort for this problem grows enormously with the size of the number. The million-dollar question posed by P vs. NP is this: are these two classes of problems one and the same? That is, could the problems that seem so difficult actually be solved by an algorithm in a reasonable amount of time – at least in principle, if only the right, fiendishly fast algorithm could be found?
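The same asymmetry is easy to see for factoring. In this minimal Python sketch (an illustrative addition – real cryptosystems use numbers hundreds of digits long, where trial division is hopeless), finding the factors requires a search whose effort grows rapidly with the magnitude of the number, while checking a claimed factorization is essentially a single multiplication:

```python
def factorize(n):
    """Naive trial division -- effort grows steeply with the size of n."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime factor as found
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors

def verify_factorization(n, factors):
    """Checking a claimed factorization: just multiply everything back."""
    product = 1
    for p in factors:
        product *= p
    return product == n     # a full check would also test each p for primality

print(factorize(91))                        # [7, 13]
print(verify_factorization(91, [7, 13]))    # True
```

The checker stays cheap no matter how large the number gets – which is exactly why a claimed break of such a system is easy to confirm, but finding one is not.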

If P = NP, sooner or later someone could find an efficient algorithm for prime factorization, tearing apart the entire cryptosphere – without any quantum computer. But there would also be gigantic opportunities: protein folding, a 50-year-old grand challenge in biology, would become easier to tackle, opening up entirely new possibilities for developing drugs to cure or treat diseases and for discovering enzymes that break down industrial waste. It would also mean finding optimal solutions to difficult everyday problems, such as planning a road trip that reaches all destinations in the shortest possible travel time, or seating wedding guests so that only friends share a table.


With his research, computer scientist Stephen Cook laid the foundations of complexity theory. Now his son is to continue the work. Photo: The University of Toronto


Ever since the P vs. NP problem was first formulated around 50 years ago, researchers around the world have been trying to find a solution. But even Stephen Cook of the University of Toronto, who laid the foundation for the field of computational complexity with a pioneering paper in 1971, was defeated by the search. He received the Turing Award, the computer science equivalent of the Nobel Prize, for that work. Yet Cook admits that he never had a good idea of how to crack the problem – “it’s just too difficult”.


