The AGI Manual
Experiments

Experiments Overview

Practical case studies and experimental results in AGI research

Theory must be validated by practice. This section documents key experiments, benchmarks, and case studies performed using the Hyperon, PRIMUS, and MeTTa frameworks.

Why Experiment?

AGI research involves many competing hypotheses about what gives rise to intelligence. Experiments allow us to:

  • Test the efficiency of different Reasoning Engines.
  • Benchmark the scalability of the AtomSpace.
  • Evaluate the generality of an agent — how well its capabilities transfer across different tasks.
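As a flavor of what the first two points involve, here is a minimal, hypothetical benchmarking sketch in Python. It is not Hyperon code: the `time_solver` helper, the toy fact base, and the two lookup strategies (a linear scan versus an indexed lookup) are all illustrative stand-ins for comparing query strategies over a knowledge store.

```python
import time

def time_solver(solver, queries):
    """Return wall-clock seconds for running a solver over all queries."""
    start = time.perf_counter()
    for q in queries:
        solver(q)
    return time.perf_counter() - start

# Two toy lookup strategies over the same knowledge base: a linear scan
# (list membership) versus an indexed lookup (set membership).
facts = [("parent", f"a{i}", f"b{i}") for i in range(5000)]
fact_index = set(facts)
queries = [("parent", f"a{i}", f"b{i}") for i in range(0, 5000, 7)]

t_scan = time_solver(lambda q: q in facts, queries)        # O(n) per query
t_index = time_solver(lambda q: q in fact_index, queries)  # O(1) average

print(f"scan: {t_scan:.4f}s  indexed: {t_index:.4f}s")
```

The same harness shape — fixed query set, swappable solver, wall-clock measurement — scales up to comparing real reasoning engines over a shared AtomSpace snapshot.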

Key Experimental Domains

1. Symbolic Reasoning Benchmarks

Testing the system's ability to solve logic puzzles, prove mathematical theorems, and handle complex ontologies (e.g., Cyc-style reasoning).
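The core mechanism such benchmarks exercise — deriving new facts from rules until a fixed point — can be sketched in a few lines of Python. This is a naive forward chainer over tuples, not the manual's actual inference machinery; the `ancestor` example (transitive closure of `parent`) is a classic toy instance of Cyc-style ontology reasoning.

```python
def forward_chain(facts, rules):
    """Apply each rule to the current fact set until no new facts appear.

    facts: a set of tuples like ("parent", "tom", "bob").
    rules: functions mapping the current fact set to derivable facts.
    """
    derived = set(facts)
    while True:
        new = set()
        for rule in rules:
            new |= rule(derived) - derived
        if not new:
            return derived
        derived |= new

def ancestor_rule(facts):
    """parent(x, y) => ancestor(x, y); ancestor(x, y) & parent(y, z) => ancestor(x, z)."""
    par = {(a, b) for (r, a, b) in facts if r == "parent"}
    anc = {(a, b) for (r, a, b) in facts if r == "ancestor"}
    out = {("ancestor", a, b) for (a, b) in par}
    out |= {("ancestor", a, c) for (a, b) in anc for (b2, c) in par if b == b2}
    return out

facts = {("parent", "tom", "bob"), ("parent", "bob", "liz")}
closed = forward_chain(facts, [ancestor_rule])
print(("ancestor", "tom", "liz") in closed)  # True
```

Benchmarks in this domain typically measure how inference time and memory scale as the fact set and rule base grow.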

2. Neural-Symbolic Integration Tasks

Experiments where a system must look at an image (Neural) and then reason logically about the objects it sees (Symbolic).
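The pipeline shape of such experiments can be sketched as follows. Everything here is hypothetical: `perceive` stubs out a vision model with hard-coded detections, `symbolize` thresholds them into symbolic facts, and `query_on` applies one toy rule. A real neural-symbolic system would replace the stub with an actual perception model.

```python
def perceive(image_id):
    """Stubbed perception: a real system would run a vision model here.
    Returns (label, confidence) detections for a given image."""
    detections = {
        "img1": [("cat", 0.92), ("mat", 0.88)],
        "img2": [("dog", 0.45)],
    }
    return detections.get(image_id, [])

def symbolize(detections, threshold=0.5):
    """Lift confident detections into symbolic facts."""
    return {("object", label) for label, conf in detections if conf >= threshold}

def query_on(facts):
    """Toy symbolic rule: if a cat and a mat are both present, infer on(cat, mat)."""
    if ("object", "cat") in facts and ("object", "mat") in facts:
        return {("on", "cat", "mat")}
    return set()

facts = symbolize(perceive("img1"))
print(query_on(facts))  # {('on', 'cat', 'mat')}
```

Note how the confidence threshold is the seam between the two halves: the low-confidence detection in `img2` never becomes a symbolic fact, so no rule can fire on it.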

3. Evolutionary Discovery

Using MOSES to evolve programs for control tasks, data classification, and game playing.
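The evolutionary loop underlying this kind of experiment — mutate, evaluate fitness, keep the best — can be illustrated with a deliberately tiny Python sketch. This is not MOSES: MOSES evolves program trees with far more sophisticated representation-building, whereas this toy evolves three real-valued parameters of a linear classifier to fit logical AND.

```python
import random

random.seed(0)

DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND

def fitness(genome):
    """Number of training cases the candidate classifier gets right."""
    w1, w2, b = genome
    return sum(1 for (x1, x2), y in DATA
               if (w1 * x1 + w2 * x2 + b > 0) == bool(y))

def mutate(genome):
    """Perturb each parameter with Gaussian noise."""
    return tuple(g + random.gauss(0, 0.5) for g in genome)

def evolve(generations=200, pop_size=20):
    pop = [tuple(random.gauss(0, 1) for _ in range(3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(DATA):
            break
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
print("fitness:", fitness(best))  # typically reaches 4 (a perfect separator)
```

Control tasks, classification, and game playing all fit this template; only the genome representation and the fitness function change.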

4. Virtual World Interaction

Placing AGI agents in 3D simulations (like Minecraft or specialized robotics simulators) to test their ability to plan and learn from physical feedback.
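A stripped-down version of the planning half of such experiments is a grid-world pathfinder. The sketch below is purely illustrative — a 2D grid standing in for a 3D simulator — using breadth-first search to plan a shortest path around an obstacle; `plan` returning `None` is the signal that the agent must replan or explore.

```python
from collections import deque

def plan(grid, start, goal):
    """Breadth-first search for a shortest path; '#' cells are walls.
    Returns the path as a list of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None  # no path: the agent must replan or explore

grid = ["..#",
        "..#",
        "..."]
path = plan(grid, (0, 0), (2, 2))
print(path)  # a 5-cell shortest path around the wall
```

In a full virtual-world experiment the interesting part is the feedback loop: the agent executes the plan step by step, observes where the simulated physics disagrees with its model, and replans.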

Data Transparency

All experiments listed here include links to the original MeTTa scripts and AtomSpace snapshots so that other researchers can replicate the results.


Next: Standard Benchmarks
