Comrade Ceasefire
Simmer slowly
June 24, 2024
The Question of What’s Fair Illuminates the Question of What’s Hard
Computational complexity theorists have discovered a surprising new way to understand what makes certain problems hard.
Theoretical computer scientists deal with complicated ideas. But whenever possible, they’d prefer to work with simpler ones. A 2009 tool known as the regularity lemma gives them a great way to do this. It effectively lets them break a given computational problem or function into simpler pieces.
For computational complexity theorists, who study the relative hardness of different problems, this ability to simplify has long helped them understand unwieldy mathematical functions. But certain problems with complex pieces have still defied analysis.
Now, new work provides a way to analyze those hard-to-understand problems. The advance comes from an unexpected area of computer science: algorithmic fairness, where algorithms like those used by banks and insurance companies are scrutinized to make sure they treat people fairly. The new results show that the fairness tools can effectively map out the different parts of a hard problem and isolate the precise regions of the problem that make it hard to solve.
“It’s really a fantastic work. I think it’s super exciting,” said Michael Kim, a computer scientist at Cornell University who helped build one of the fairness tools that was repurposed in the new work. “As a theorist who’s working in these spaces, it’s kind of an ideal outcome that somebody takes your work from one area and applies it to something else.”
More at:
The Question of What’s Fair Illuminates the Question of What’s Hard | Quanta Magazine
www.quantamagazine.org