Age of Scaling
A clicker about money, data, compute, models, and the market

Operations

Money: $0 · $0 / hr
Compute (GPUs): 0 · 0
Training data (B tokens): 0 · 0 / hr
Best model: None · 0 trained
Capability: 0.00
Buy compute and data, train a model, then deploy it.
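
The whole loop fits in a small state sketch. Below is a minimal, hypothetical TypeScript model of the resource panel above; every field name and the tick logic are assumptions for illustration, not the game's actual code.

```ts
// Hypothetical shape of the resource panel; all names are assumptions.
interface GameState {
  money: number;              // $
  moneyPerHour: number;       // $ / hr, earned by deployed channels
  gpus: number;               // GPUs owned
  dataTokensB: number;        // training data, billions of tokens
  dataTokensBPerHour: number; // data acquisition rate
  bestModel: string | null;   // strongest model trained so far
  modelsTrained: number;
  capability: number;         // capability score of the best model
}

// One simulated hour of the buy -> train -> deploy loop: accrue income and data.
function tick(s: GameState): GameState {
  return {
    ...s,
    money: s.money + s.moneyPerHour,
    dataTokensB: s.dataTokensB + s.dataTokensBPerHour,
  };
}
```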

Spend

Acquire datasets, buy compute, and train models. Prices are “gamey-realistic” (inspired by what labs have plausibly done), not meant as exact accounting.
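
For illustration only, such prices might live in a config like the one below. Every key and number here is invented for the sketch; the game's actual prices are not shown.

```ts
// Entirely hypothetical prices, in the same "gamey-realistic" spirit.
const PRICES = {
  gpuPurchase: 25_000,   // $ per GPU, H100-class ballpark (assumption)
  dataPerBTokens: 5_000, // $ per billion tokens of training data (assumption)
  gpuHour: 2,            // $ per GPU-hour of training or serving (assumption)
};
```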

Data acquisition

Compute

Training

Custom training run
Choose parameter count, training data, and training compute; see the sketch after this block for how the two readouts might be derived. Training uses whatever GPUs remain after serving.
Params (B):
Data (B tokens):
Total training compute (GPU-hours):
Projected capability: — • Est. time: —
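
A sketch of how those two readouts could be computed. The capability formula is an assumption modeled on a Chinchilla-style scaling law (loss as power laws in parameters and tokens, with constants near the published Chinchilla fit); the game's real formula is not shown, and `gpusFree` stands in for "whatever GPUs remain after serving".

```ts
// Hypothetical Chinchilla-style projection: lower projected loss -> higher capability.
// A, B, alpha, beta, E are near the published Chinchilla fit, used here as stand-ins.
function projectedCapability(paramsB: number, dataBTokens: number): number {
  const E = 1.69;                 // irreducible loss
  const A = 406.4, alpha = 0.34;  // parameter-count term
  const B = 410.7, beta = 0.28;   // data term
  const loss =
    E +
    A / Math.pow(paramsB * 1e9, alpha) +
    B / Math.pow(dataBTokens * 1e9, beta);
  return Math.max(0, 10 - loss);  // mapping loss to a score is pure assumption
}

// Est. time: the chosen training compute spread across the GPUs left after serving.
function estTrainingHours(totalGpuHours: number, gpusFree: number): number {
  return gpusFree > 0 ? totalGpuHours / gpusFree : Infinity;
}
```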

Deploy

Run multiple channels simultaneously. Each channel serves a model from your roster, consuming GPUs to earn revenue. Bigger models need more GPUs per unit of throughput. Enterprise has GPU priority over API, which in turn outranks Free.
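
The priority rule is simple to state in code: hand out GPUs in priority order until they run out. The `Channel` shape and its fields are assumptions for illustration.

```ts
// Hypothetical channel description; field names are assumptions.
interface Channel {
  name: "Enterprise" | "API" | "Free";
  demandGpus: number;         // GPUs needed to serve this channel's full demand
  revenuePerGpuHour: number;  // what a GPU earns on this channel
}

// Allocate GPUs strictly by the stated priority: Enterprise, then API, then Free.
function allocateGpus(channels: Channel[], gpusTotal: number): Map<string, number> {
  const priority = ["Enterprise", "API", "Free"];
  const sorted = [...channels].sort(
    (a, b) => priority.indexOf(a.name) - priority.indexOf(b.name),
  );
  const out = new Map<string, number>();
  let left = gpusTotal;
  for (const ch of sorted) {
    const given = Math.min(ch.demandGpus, left);
    out.set(ch.name, given);
    left -= given;
  }
  return out;
}
```

Whatever `left` remains after serving is the pool the custom training run above draws from.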

Event log