{"id":14,"url":"https://pm.philipcastiglione.com/papers/14.json","title":"Universal Intelligence: A Deļ¬nition of Machine Intelligence","read":false,"authors":"Shane Legg, Marcus Hutter","year":2007,"auto_summary":"**Universal Intelligence: A Definition of Machine Intelligence**\n\n**Authors:** Shane Legg and Marcus Hutter\n\n**Abstract:**\nThe paper addresses the fundamental problem in artificial intelligence (AI) of defining what intelligence is, particularly for machines. The authors propose a mathematical formalization of intelligence by extracting essential features from informal definitions given by experts. This leads to a general measure of intelligence applicable to arbitrary machines, which is related to the theory of universal optimal learning agents. The paper also surveys various tests and definitions of intelligence proposed for machines.\n\n**Key Concepts:**\n- **Intelligence Definition:** Intelligence is defined as an agent's ability to achieve goals in a wide range of environments.\n- **Agent-Environment Framework:** The interaction between an agent and its environment is modeled using actions, observations, and rewards. The agent's goal is to maximize the reward it receives.\n- **Universal Intelligence:** A formal measure of intelligence is proposed, which evaluates an agent's performance across all computable reward-summable environments. This measure is weighted by the Kolmogorov complexity of each environment, reflecting the principle of Occam's razor.\n- **Kolmogorov Complexity:** Used to measure the complexity of environments, it is defined as the length of the shortest program that can describe an environment.\n\n**Discussion:**\n- **Properties of Universal Intelligence:** The measure is valid, meaningful, informative, general, unbiased, and formal. However, it is not directly computable due to the use of Kolmogorov complexity.\n- **Comparison with Other Tests:** The paper compares universal intelligence with other tests like the Turing Test, compression tests, and psychometric AI, highlighting its advantages in terms of generality and formal foundation.\n- **Practical Implementation:** While the theoretical definition is not directly computable, the authors suggest approximating it by testing agents on a large sample of environments and weighting their performance by the complexity of these environments.\n\n**Conclusion:**\nThe paper provides a formal and general definition of machine intelligence that is grounded in computation, information, and complexity. It aims to offer a more rigorous foundation for measuring intelligence in machines compared to existing tests. 
Future work involves developing practical tests that approximate this theoretical measure.\n\n**Critiques and Responses:**\n- **Computability Assumption:** The authors argue that assuming environments are computable is reasonable given current physical theories.\n- **Bounded Reward Assumption:** The paper defends the assumption of bounded rewards by suggesting it reflects a realistic goal system.\n- **Block's and Searle's Arguments:** The authors respond to philosophical critiques by emphasizing the practical performance of agents over theoretical constructs like understanding or consciousness.\n\nOverall, the paper presents a novel approach to defining and measuring machine intelligence, which could have significant implications for the development and evaluation of AI systems.","notes":{"id":14,"name":"notes","body":null,"record_type":"Paper","record_id":14,"created_at":"2024-12-10T04:49:29.527Z","updated_at":"2024-12-10T04:49:29.527Z"},"created_at":"2024-12-10T04:49:15.974Z","updated_at":"2024-12-10T04:49:36.784Z"}