1. Introduction

Representational Convergence!

“There is indeed an endpoint to this convergence and a principle that drives it”


Different models are trying to arrive at this representation of reality!

→ a representation of the joint distribution over events in the world that generate the data we observe.

Figure → a textual description or an image is each a kind of projection!

Representation learning is the process of finding a vector embedding that statistically models these various measurements & projections.

Vector embeddings are all derived from this underlying reality Z; the more data a model trains on, the more information it gains about Z, so aligning to Z means converging toward a common convergent point.

Anna Karenina scenario

“All well-performing neural nets represent the world in the same way”

→ Evidence for this possibility: Section 2

The Platonic Representation Hypothesis

→ A situation where we are in an Anna Karenina scenario and the “happy representation” that is converged upon is one that reflects a statistical model of the underlying reality.

→ The potential nature of this statistical model: Section 4

2. Representations are Converging

Preliminaries: representations are restricted to vector embeddings
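Once representations are vector embeddings, convergence between two models can be quantified by comparing the neighborhood structure their embeddings induce over the same inputs. A minimal sketch of such a mutual k-nearest-neighbor alignment score (my own illustration; the function name and the choice of k are assumptions, not taken from the paper):

```python
import numpy as np

def mutual_knn_alignment(feats_a, feats_b, k=3):
    """Fraction of shared k-nearest neighbors between two embedding spaces.

    feats_a, feats_b: (n_samples, dim) embeddings of the SAME inputs
    produced by two different models. Higher score = more aligned.
    (Illustrative sketch, not the paper's exact metric.)
    """
    def knn_indices(feats):
        # inner-product kernel: K[i, j] = <f(x_i), f(x_j)>
        k_mat = feats @ feats.T
        np.fill_diagonal(k_mat, -np.inf)  # exclude self-neighbors
        # indices of the k largest kernel entries per row
        return np.argsort(-k_mat, axis=1)[:, :k]

    nn_a = knn_indices(feats_a)
    nn_b = knn_indices(feats_b)
    overlaps = [len(set(a) & set(b)) / k for a, b in zip(nn_a, nn_b)]
    return float(np.mean(overlaps))
```

Identical (or inner-product-preserving) embeddings score 1.0, while unrelated embeddings score near chance, so the score gives a concrete handle on "how converged" two representations are.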