CSPaper: review sidekick

📉 When Math Becomes Performance Art: The AAAI 2023 Pearson Circus

Category: Artificial intelligence & Machine Learning
Tags: aaai2023, proof, peer-review
  • Sylvia (Super Users) #1

    “Sometimes, a proof doesn’t prove — it just performs.”

    If you thought all math proofs in top AI conferences were solid as a rock, think again. Let me introduce you to an unforgettable gem from AAAI 2023: the paper Metric Multi-View Graph Clustering by Yuze Tan et al., which boldly attempts to re-prove the well-known fact that the Pearson correlation coefficient lies within the range [-1, 1]. The result? A demonstration that would make high school math teachers faint and TikTok comedians proud.


    🤹 The Magical Disappearing Domain

    For the uninitiated: the Pearson correlation coefficient between two vectors is mathematically guaranteed to lie in [-1, 1], thanks to the Cauchy-Schwarz inequality. It’s standard textbook stuff.

    [Screenshot: definition of the Pearson correlation coefficient]
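    For completeness, the textbook argument fits in two lines (writing $\tilde{x} = x - \bar{x}\mathbf{1}$ and $\tilde{y} = y - \bar{y}\mathbf{1}$ for the centered vectors):

    ```latex
    r(x, y) \;=\; \frac{\langle \tilde{x}, \tilde{y} \rangle}{\|\tilde{x}\|\,\|\tilde{y}\|},
    \qquad
    |\langle \tilde{x}, \tilde{y} \rangle| \;\le\; \|\tilde{x}\|\,\|\tilde{y}\|
    \;\;\text{(Cauchy-Schwarz)}
    \;\;\Longrightarrow\;\;
    -1 \le r(x, y) \le 1 .
    ```

    Equality holds exactly when $\tilde{y}$ is a scalar multiple of $\tilde{x}$ — which is precisely the special case the paper mistakes for the whole story.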

    But our AAAI adventurers were not content with dusty math truths. No, they had a mission: to prove it anew. And their chosen method? Assume that one vector is a linear transformation of another, and compute:

    [Screenshot: the paper's derivation, assuming one vector is a linear transformation of the other]

    Voilà! They have “proven” that the correlation is either +1 or -1. 🪄

    What happened to all the intermediate values in (-1, 1)? Apparently, they took a break.
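    A minimal numpy sketch of the gap (the `pearson` helper is mine, not from the paper): assuming one vector is an affine function of the other only ever produces the two endpoints, while any non-collinear pair lands strictly inside (-1, 1) — exactly the cases the "proof" silently discards.

    ```python
    import numpy as np

    def pearson(x, y):
        """Pearson correlation: centered dot product over the product of norms."""
        xc, yc = x - x.mean(), y - y.mean()
        return float(xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc)))

    x = np.arange(10.0)

    # The paper's special case: y = a*x + b covers only the endpoints.
    assert abs(pearson(x, 3 * x + 2) - 1.0) < 1e-12   # a > 0  ->  r = +1
    assert abs(pearson(x, -3 * x + 2) + 1.0) < 1e-12  # a < 0  ->  r = -1

    # A non-collinear pair: the correlation sits strictly inside (-1, 1).
    y = np.array([0.0, 1, 0, 1, 0, 1, 0, 1, 0, 1])
    r = pearson(x, y)
    assert -1.0 < r < 1.0
    print(f"r = {r:.4f}")  # some value strictly between -1 and 1
    ```

    Nothing here proves boundedness, of course — that takes Cauchy-Schwarz over all pairs — but it makes the hole in the special-case argument concrete.
    
    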


    🤔 Wait... What?

    This “proof” ends up asserting that Pearson correlation is always either -1 or 1 if there's a linear dependency, which is… not how range proofs work. It’s a bit like proving “all people are 6 feet tall” by only considering NBA players.


    🐶 Of Dogs and Diagrams

    To illustrate their “linearity-aware metric,” the authors even threw in a circle diagram showing dog breeds and angular distances.

    [Screenshot: circle diagram of dog breeds and angular distances]

    It’s cute — and makes for a nice graphic — but it doesn’t fix the fact that the underlying proof logic is fundamentally broken.


    🧠 A Lesson in Math (or Lack Thereof)

    This approach confuses a special case (collinear vectors) with a general property. To properly prove boundedness, you need to show it holds across all vector pairs, not just cherry-picked ones.

    It's like proving "all fruits are apples" by only looking at apples.


    🎭 How Did This Pass Peer Review?

    Even more incredibly, this section made it into TKDE, a leading journal. That’s like publishing fanfiction in Nature.

    Possible reasons:

    • Reviewers focused only on clustering results.
    • Theorem was seen as filler, not substance.
    • Reviewer: “Looks mathy. ✅”

    🔍 The Broader Problem with Peer Review

    This paper highlights a systemic issue in AI peer review:

    • Proofs are often unchecked.
    • “Theorem + Corollary + Proof” is seen as a formality.
    • Math becomes aesthetic, not analytic.

    As one Zhihu commenter put it:

    “导师说,要有 theorem,于是就有了。”
    (“The advisor said the paper must have a theorem, so a theorem appeared.”)


    🧩 Final Thoughts

    To be fair, the paper still offers an actual clustering algorithm and decent empirical results. But the proof? It’s math theater. 🧵✨

    Next time you see "Proof 1," brace yourself — you might be watching a circus act.

    • Hu8kKo34 (Super Users) #2

      Lol, it's funny ... I've seen authors overload the theory section unnecessarily 😵

      • Joanne #3

        😵 😵 😵

© 2025 CSPaper.org Sidekick of Peer Reviews
Debating the highs and lows of peer review in computer science.