Algorithm Efficiency

Algorithms
definition

Also known as: time complexity, Big O

Grade 9-12


A measure of how an algorithm's resource use, chiefly running time, grows as the size of its input grows. Efficiency determines whether software can handle real-world data sizes.

Definition

A measure of how the resources an algorithm consumes, chiefly running time but also memory, grow as a function of the input size.

💡 Intuition

Does doubling the data double the time? Or quadruple it? Or barely change it?

🎯 Core Idea

Algorithm efficiency matters increasingly as data grows—a slow algorithm on small data may fail completely on large data.

Example

O(n) is linear: twice the data, twice the time. O(n^2) is quadratic: twice the data, four times the time.
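This doubling behavior can be checked directly by counting basic operations. The sketch below uses two illustrative helper functions (not from any library) that tally how many times the innermost step runs:

```python
def linear_ops(n):
    """O(n): one pass over the data."""
    count = 0
    for _ in range(n):
        count += 1
    return count

def quadratic_ops(n):
    """O(n^2): a nested pass for every element."""
    count = 0
    for _ in range(n):
        for _ in range(n):
            count += 1
    return count

print(linear_ops(100), linear_ops(200))        # doubling n doubles the count
print(quadratic_ops(100), quadratic_ops(200))  # doubling n quadruples the count
```

Running this shows the linear count going from 100 to 200 while the quadratic count jumps from 10,000 to 40,000.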

Notation

Big O notation O(f(n)) describes the upper bound on growth rate. n is the input size, and T(n) is the running time as a function of n.

🌟 Why It Matters

Efficiency determines whether software can handle real-world data sizes. Google's search engine processes billions of queries because it uses O(\log n) algorithms, not O(n^2). In fields from genomics to finance, choosing the right algorithm can mean the difference between seconds and centuries of computation.
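The gap between O(n) and O(log n) can be made concrete with worst-case step counts for linear search versus binary search. The step-count helpers below are illustrative, not library functions:

```python
def linear_search_steps(n):
    # worst case: examine every one of n elements
    return n

def binary_search_steps(n):
    # worst case: halve the remaining range until one element is left
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"n={n}: linear={linear_search_steps(n)}, binary={binary_search_steps(n)}")
```

For a billion items, linear search may need a billion steps while binary search needs about 30, which is why logarithmic algorithms scale to web-sized data.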

💭 Hint When Stuck

When analyzing efficiency, first identify the input size n. Then count how many times the most-repeated operation executes as a function of n (look for nested loops). Finally, express the growth rate using Big O, dropping constants and lower-order terms.
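The three steps above can be walked through on a concrete (illustrative) snippet, with each step marked as a comment:

```python
def has_duplicate(items):
    n = len(items)                    # step 1: the input size is n
    for i in range(n):                # step 2: outer loop runs n times
        for j in range(i + 1, n):     # step 2: inner loop runs up to n - 1 times
            if items[i] == items[j]:  # the most-repeated operation
                return True
    return False

# The comparison executes n*(n-1)/2 times in the worst case.
# Step 3: dropping constants and lower-order terms gives O(n^2).
print(has_duplicate([3, 1, 4, 1, 5]))  # True
print(has_duplicate([3, 1, 4]))        # False
```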

Formal View

An algorithm is O(f(n)) if there exist constants c > 0 and n_0 such that for all n \geq n_0, the running time T(n) \leq c \cdot f(n). Common classes: O(1), O(\log n), O(n), O(n \log n), O(n^2), O(2^n).
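The formal definition can be checked numerically for a specific case. Suppose an algorithm's running time is T(n) = 2n + 10; it is O(n), witnessed here by the constants c = 3 and n_0 = 10 (values chosen for illustration):

```python
def T(n):
    # hypothetical running-time function for a linear algorithm
    return 2 * n + 10

c, n0 = 3, 10
# For every n >= n0, T(n) <= c * n holds, so T(n) is O(n).
assert all(T(n) <= c * n for n in range(n0, 10_000))
print("T(n) = 2n + 10 is O(n) with c = 3, n0 = 10")
```

The lower-order term (+10) and the constant factor (2) disappear into the choice of c and n_0, which is exactly why Big O drops them.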

🚧 Common Stuck Point

Big O ignores constant factors—2n and 100n are both O(n) because the growth rate is what matters.

⚠️ Common Mistakes

  • Confusing Big O (upper bound) with exact running time
  • Ignoring nested loops when counting operations
  • Assuming a faster algorithm is always better (ignoring constant factors for small inputs)
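The last mistake can be seen with a small numeric sketch: an O(n) algorithm with a heavy constant factor, T1(n) = 100n, versus an O(n^2) algorithm, T2(n) = n^2 (illustrative cost functions, not measured times):

```python
def T1(n):
    return 100 * n  # "fast" O(n) algorithm with a large constant factor

def T2(n):
    return n * n    # "slow" O(n^2) algorithm with no constant overhead

for n in (10, 100, 1000):
    winner = "O(n^2)" if T2(n) < T1(n) else "O(n)"
    print(f"n={n}: 100n={T1(n)}, n^2={T2(n)}, cheaper: {winner}")
```

Below n = 100 the quadratic algorithm actually does less work; beyond that point the linear one wins by an ever-growing margin. Big O describes the large-n behavior, not the small-n behavior.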

Frequently Asked Questions

What is Algorithm Efficiency in CS Thinking?

A measure of how the resources an algorithm consumes, chiefly running time but also memory, grow as a function of the input size.

When do you use Algorithm Efficiency?

When analyzing efficiency, first identify the input size n. Then count how many times the most-repeated operation executes as a function of n (look for nested loops). Finally, express the growth rate using Big O, dropping constants and lower-order terms.

What do students usually get wrong about Algorithm Efficiency?

Big O ignores constant factors—2n and 100n are both O(n) because the growth rate is what matters.


How Algorithm Efficiency Connects to Other Ideas

To understand algorithm efficiency, you should first be comfortable with the concept of an algorithm. Once you have a solid grasp of algorithm efficiency, you can move on to searching and sorting.

💻 Animated Visualization

Compare how different algorithms scale with input size