
Algorithms to Live By: The Computer Science of Human Decisions

Brian Christian & Tom Griffiths • 353 pages (original edition)

Difficulty: 4/5 • 29-page summary • 62-minute read • audio version available • PDF

Quick Summary

The book "Algorithms to Live By" explores how computer science principles offer practical solutions to everyday human challenges. It reveals that common dilemmas—from finding a romantic partner to organizing a home—can be understood and optimized through algorithms like optimal stopping, explore/exploit tradeoffs, and sorting. The authors argue that many human "failures" are actually reflections of inherent computational difficulties. By applying concepts such as caching, scheduling, Bayesian inference, and game theory, individuals can make better decisions, manage uncertainty, and navigate time constraints more effectively. The book encourages a shift from seeking perfect solutions to embracing computationally kind and efficient heuristics for a more fulfilling life.


Key Ideas

1. Optimal stopping helps decide when to commit, like in dating or job searching, often following the 37% rule.
2. The explore/exploit tradeoff balances trying new things with sticking to known preferences, adapting based on remaining time.
3. Efficient sorting methods in computer science offer insights into organizing physical and digital information.
4. Bayesian inference allows for better predictions by combining prior beliefs with new evidence, even from small data.
5. Understanding computational intractability and using relaxation or randomness can lead to practical solutions for complex problems.

Optimal Stopping

This section introduces optimal stopping problems: the challenge of deciding when to commit when options arrive one at a time and decisions are irreversible. The 37% rule suggests spending the first 37% of the search purely gathering information to establish a baseline, then choosing the first option that surpasses everything previously encountered. This strategy balances information gathering with timely commitment in various life decisions.

The most critical decision is often simply when to stop.
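As an illustrative sketch (not code from the book), the 37% rule can be simulated directly; the `secretary_choice` helper and the candidate pool below are invented for this example:

```python
import math
import random

def secretary_choice(scores):
    """Apply the 37% rule: observe the first n/e candidates without
    committing, then pick the first one better than all seen so far."""
    n = len(scores)
    cutoff = int(n / math.e)  # roughly 37% of the pool
    best_seen = max(scores[:cutoff], default=float("-inf"))
    for score in scores[cutoff:]:
        if score > best_seen:
            return score
    return scores[-1]  # forced to take the last remaining option

# Rough check: the rule lands the single best candidate about 37% of the time.
random.seed(42)
trials = 10_000
hits = sum(
    secretary_choice(random.sample(range(100), 100)) == 99
    for _ in range(trials)
)
print(hits / trials)  # empirically close to 0.37
```

The simulation recovers the rule's signature property: commitment after a 1/e observation phase selects the very best option roughly 37% of the time, which is provably the best any strategy can do in this setting.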

Explore/Exploit

The explore/exploit tradeoff examines the dilemma between trying new things (exploration) and sticking with known preferences (exploitation). This balance is crucial for a fulfilling life and depends on the remaining time in a given interval. Modeled by the multi-armed bandit problem, decisions must consider the long-term sequence, valuing exploration more when there's more time to benefit.

Exploration is the act of gathering new information, while exploitation is the use of existing knowledge to secure a known good result.
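A minimal way to see the tradeoff in code is epsilon-greedy, a standard bandit heuristic (simpler than the Gittins index the book discusses); the arm payout probabilities here are made up for illustration:

```python
import random

def epsilon_greedy(arm_means, rounds, epsilon=0.1, seed=0):
    """Multi-armed bandit heuristic: explore a random arm with
    probability epsilon, otherwise exploit the arm with the best
    observed average payout."""
    rng = random.Random(seed)
    pulls = [0] * len(arm_means)
    totals = [0.0] * len(arm_means)
    reward = 0.0
    for _ in range(rounds):
        if rng.random() < epsilon or 0 in pulls:
            arm = rng.randrange(len(arm_means))   # explore
        else:
            averages = [t / p for t, p in zip(totals, pulls)]
            arm = averages.index(max(averages))   # exploit
        payout = 1.0 if rng.random() < arm_means[arm] else 0.0
        pulls[arm] += 1
        totals[arm] += payout
        reward += payout
    return reward, pulls

reward, pulls = epsilon_greedy([0.2, 0.5, 0.8], rounds=2000)
# Over a long horizon the best arm (0.8) attracts most of the pulls.
```

Raising `epsilon` values exploration more, which pays off when many rounds remain; lowering it toward zero as the horizon shrinks mirrors the book's advice to exploit more as time runs out.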

Sorting

Sorting is a fundamental operation in computing and daily life, essential for organizing information from email inboxes to search results. Despite its importance, sorting is prone to diseconomies of scale: unit costs rise with item quantity. Efficient algorithms like Mergesort (divide-and-conquer) and Bucket Sort (using prior knowledge) break the quadratic barrier, while Big-O notation characterizes how an algorithm's worst-case cost grows with input size.
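Mergesort's divide-and-conquer structure is compact enough to sketch in full; this is a textbook implementation, not code from the book:

```python
def mergesort(items):
    """Divide-and-conquer sort: O(n log n) comparisons, beating the
    quadratic cost of naive comparison-based methods."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left, right = mergesort(items[:mid]), mergesort(items[mid:])
    # Merge two sorted halves in a single linear pass.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(mergesort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Because each level of recursion does linear work and there are only log n levels, doubling the pile of items far less than quadruples the sorting effort.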

Caching

Caching involves managing memory hierarchies to balance speed and capacity by keeping frequently accessed information in easily accessible locations. The Least Recently Used (LRU) policy is a common and effective heuristic, assuming recently used items will be needed again. This principle applies to computer systems, library organization, and even domestic filing, demonstrating that forgetting can be an optimal tuning to environmental needs.
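The LRU policy is a few lines in Python; this sketch (the class name and keys are invented for the example) uses an `OrderedDict` to track recency:

```python
from collections import OrderedDict

class LRUCache:
    """Evict the least recently used entry once capacity is exceeded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # drop the LRU entry

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")      # touching "a" makes "b" the eviction candidate
cache.put("c", 3)   # evicts "b"
```

The same logic explains the book's domestic advice: whatever you reached for most recently should sit closest to hand, because recent use is the best available predictor of future use.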

Scheduling

Scheduling optimizes tasks by choosing the correct metric for success. Strategies like Earliest Due Date minimize lateness, while Shortest Processing Time clears to-do lists quickly, especially when weighted by importance. Issues like priority inversion and precedence constraints make most scheduling problems intractable. Preemption and interrupt coalescing are strategies to manage uncertainty and context switching costs.

The difficulty people encounter when trying to perfectly manage a complex calendar is a result of the inherent mathematical complexity of the problem rather than a personal failure.
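The two single-machine strategies named above reduce to choosing a sort key; the task list here is invented for illustration:

```python
# Tasks as (name, duration_in_days, due_day).
tasks = [("taxes", 3, 14), ("email", 1, 2), ("report", 5, 10)]

# Earliest Due Date: sort by deadline to minimize the worst lateness.
edd = sorted(tasks, key=lambda t: t[2])

# Shortest Processing Time: sort by duration to clear the to-do list
# fastest on average. (The weighted variant divides duration by
# importance and sorts on that ratio.)
spt = sorted(tasks, key=lambda t: t[1])

print([t[0] for t in edd])  # ['email', 'report', 'taxes']
print([t[0] for t in spt])  # ['email', 'taxes', 'report']
```

The catch the section notes: once precedence constraints or machine-switching costs enter, no such simple sort is optimal, and most realistic variants become intractable.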

Bayes's Rule

Bayes's Rule provides a framework for predicting the future from small data by reasoning backward from evidence and updating prior beliefs. Laplace’s Law offers a simple formula for estimating probabilities, while the Copernican Principle forecasts durations based on current age. Understanding normal vs. power-law distributions is crucial for accurate predictions in various real-world phenomena.
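Both rules of thumb mentioned above are one-liners; the example numbers are invented:

```python
def laplace_estimate(successes, trials):
    """Laplace's Law: after s successes in n trials, estimate the
    probability of another success as (s + 1) / (n + 2)."""
    return (successes + 1) / (trials + 2)

def copernican_lifetime(current_age):
    """Copernican Principle: absent other information, expect a
    phenomenon to last about as long again as it already has."""
    return 2 * current_age

print(laplace_estimate(3, 3))    # 0.8 — three-for-three is promising, not certain
print(copernican_lifetime(50))   # 100 — a 50-year-old institution's guessed total span
```

Laplace's correction keeps small samples humble (a perfect record over three trials yields 80%, not 100%), while the Copernican estimate is exactly the right prior-free guess only when durations follow the right kind of distribution, which is why the section stresses knowing whether you face a normal or a power-law world.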

Overfitting

Overfitting occurs when a model becomes too complex, perfectly fitting past data (including noise) but failing to make accurate future predictions. This highlights the danger of data idolatry and over-reliance on proxy metrics. Strategies like cross-validation detect overfitting, while regularization combats it by penalizing complexity, often leading to better results with simpler heuristics in uncertain environments.
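Holdout validation (the simplest form of cross-validation) can be sketched with two toy models; the data, models, and helper names are all invented for this example:

```python
import random

def fit_mean(train):
    """Simple model: predict the training mean everywhere."""
    mean = sum(y for _, y in train) / len(train)
    return lambda x: mean

def fit_memorize(train):
    """Overfit model: reproduce training points exactly (noise and
    all), falling back to the nearest training x elsewhere."""
    table = dict(train)
    xs = sorted(table)
    return lambda x: table[min(xs, key=lambda t: abs(t - x))]

def mse(model, points):
    """Mean squared prediction error over a set of (x, y) points."""
    return sum((model(x) - y) ** 2 for x, y in points) / len(points)

random.seed(1)
# Underlying truth is flat at 5; every observation carries noise.
data = [(x, 5 + random.gauss(0, 1)) for x in range(200)]
train, test = data[::2], data[1::2]   # holdout split for validation

simple, memorizer = fit_mean(train), fit_memorize(train)
# The memorizer scores a perfect 0 on its own training data, yet loses
# to the plain mean on held-out points — the signature of overfitting.
```

Regularization attacks the same failure from the other side: instead of detecting complexity after the fact, it charges the model for every degree of freedom up front.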

Relaxation

Many real-world problems, like the traveling salesman problem, are computationally intractable, meaning they lack efficient optimal solutions. Relaxation techniques address this by simplifying constraints: constraint relaxation removes rules, continuous relaxation uses fractions, and Lagrangian relaxation turns rules into costs. These methods provide useful approximations and lower bounds, making hard problems manageable.
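Continuous relaxation is easiest to see on the knapsack problem (a standard example, not one from the book): allowing fractional items turns an intractable 0/1 problem into a greedy one whose answer bounds the true optimum:

```python
def fractional_knapsack(items, capacity):
    """Continuous relaxation of 0/1 knapsack: fractions of items are
    allowed, so greedily taking the highest value-per-weight items is
    optimal. The result upper-bounds the best achievable 0/1 value."""
    value = 0.0
    # Sort by value density, best first.
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        take = min(w, capacity)
        value += v * take / w
        capacity -= take
        if capacity == 0:
            break
    return value

items = [(60, 10), (100, 20), (120, 30)]  # (value, weight)
bound = fractional_knapsack(items, capacity=50)
print(bound)  # 240.0 — the true 0/1 optimum (220 here) can never exceed this
```

That bound is exactly what the section means by relaxation making hard problems manageable: solve the easy relaxed version, then use its answer to judge how good any feasible solution to the hard version really is.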

Randomness

Randomness is a powerful tool in computer science, offering efficient approximate solutions to difficult problems faster than deterministic methods. Sampling (e.g., Monte Carlo) estimates complex quantities, while randomized algorithms (e.g., Miller-Rabin) test primality. Randomness helps escape local maxima in optimization through techniques like simulated annealing, and is also a cornerstone of evolution and creativity.
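The Monte Carlo idea fits in a few lines; this classic pi-estimation sketch (not from the book) shows sampling standing in for exact computation:

```python
import random

def monte_carlo_pi(samples, seed=0):
    """Estimate pi by throwing random points into the unit square and
    counting how many land inside the quarter circle: the hit rate
    approaches pi/4."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * inside / samples

estimate = monte_carlo_pi(100_000)
# Converges on pi (~3.1416) as the sample count grows.
```

The same trade is behind Miller-Rabin and simulated annealing: accept a tunably small chance of error in exchange for an answer arriving in reasonable time.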

Networking

Networking relies on protocols for communication, breaking data into packets for robustness (packet switching). Acknowledgment ensures reliability over unreliable channels (the Two Generals problem). Exponential backoff resolves interference in crowded channels, while flow control (the AIMD algorithm) manages congestion. Bufferbloat highlights that latency is often more critical than raw capacity, encouraging strategies like "tactical dropped balls."
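Exponential backoff is simple enough to sketch directly; the function name and parameters below are invented for illustration (real network stacks use the same shape with their own constants):

```python
import random

def backoff_delays(attempts, base=1.0, cap=60.0, seed=0):
    """Exponential backoff with jitter: after each failure, wait a
    random amount up to a ceiling that doubles per attempt, capped."""
    rng = random.Random(seed)
    delays = []
    for attempt in range(attempts):
        ceiling = min(cap, base * 2 ** attempt)
        delays.append(rng.uniform(0, ceiling))
    return delays

print(backoff_delays(5))
# Ceilings grow 1, 2, 4, 8, 16 seconds; actual waits are jittered below them.
```

The jitter matters: if every collided sender retried after the same deterministic delay, they would collide again in lockstep, which is the very interference backoff exists to break.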

Game Theory

Game theory analyzes interactions between multiple agents, involving recursive thinking about others' actions. A Nash equilibrium is a stable state, but finding it can be intractable. The prisoner's dilemma illustrates how individual rationality can lead to collectively worse outcomes. Mechanism design aims to change game rules to encourage desirable behaviors, often through external consequences or evolutionary "internal mechanism designers."
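The prisoner's dilemma logic can be checked mechanically; the sentence lengths below are the textbook payoffs, not figures from the book:

```python
# Payoffs as (my_sentence, their_sentence) in years — lower is better.
PAYOFFS = {
    ("cooperate", "cooperate"): (1, 1),    # both stay silent
    ("cooperate", "defect"):    (10, 0),   # I am betrayed
    ("defect",    "cooperate"): (0, 10),   # I betray
    ("defect",    "defect"):    (5, 5),    # mutual betrayal
}

def best_response(their_move):
    """My move that minimizes my own sentence, given the other's move."""
    return min(("cooperate", "defect"),
               key=lambda mine: PAYOFFS[(mine, their_move)][0])

# Defection is dominant: it is the best response to either move...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...yet mutual defection (5, 5) leaves both worse off than mutual
# cooperation (1, 1) — individually rational, collectively worse.
```

Mechanism design is the art of editing the `PAYOFFS` table, so to speak: adding external consequences until the dominant strategy and the collectively good outcome coincide.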

Computational Kindness

Computational kindness advocates for framing interactions to minimize others' cognitive labor. This means stating clear preferences, limiting options, and designing systems (like linear parking lots or real-time transit displays) that reduce mental strain. By simplifying decision-making and prioritizing social cooperation over exhaustive calculation, algorithmic principles can ease the navigation of complex daily life.

Frequently Asked Questions

What is the "37% rule" and how can it be applied?

The 37% rule is an optimal stopping strategy. For choices with a finite, unknown pool of options, explore the first 37% to gather data, then commit to the first option better than any previously seen. This helps find a good outcome without waiting too long.

How do we balance exploration and exploitation in daily life?

Balancing exploration (trying new things) and exploitation (sticking to known favorites) depends on your time horizon. Explore more when you have more time to benefit from new discoveries. Exploit more as time runs out, maximizing known satisfaction.

Why is "messiness" sometimes an optimal strategy according to the book?

When the cost of organizing items (sorting) exceeds the cost of searching for them, messiness becomes optimal. With efficient digital search tools, for instance, the value of meticulous sorting diminishes, making it rational to prioritize quick retrieval over perfect order.

How does Bayes's Rule help in making predictions from small data?

Bayes's Rule allows us to update our beliefs about future events based on new evidence and existing prior probabilities. By understanding the underlying statistical distribution (e.g., normal or power-law), we can make more accurate forecasts, even with limited observations.

What does "computational kindness" mean in the context of human interactions?

Computational kindness means designing interactions and systems to minimize the cognitive effort required of others. By being clear, limiting choices, and reducing mental friction, we foster easier communication and cooperation, improving overall social outcomes and reducing stress.