Algorithms Concepts
9 concepts · Grades 6-8, 9-12 · 9 prerequisite connections
This family view narrows the full concept map to one connected cluster. Read it from left to right: earlier nodes support later ones, and the densely connected middle of the graph usually marks the concepts that the most future work depends on.
Use the graph to plan review, then use the full concept list below to open precise pages for definitions, examples, and related content. That combination keeps the page useful for both human study flow and crawlable internal linking.
Concept Dependency Graph
Concepts flow left to right, from foundational to advanced. Hover to highlight connections. Click any concept to learn more.
Connected Families
Algorithms concepts have 11 connections to other families.
How Algorithms Connects to Other Topics
Algorithms concepts build on and feed into concepts across other families. Understanding these connections helps you plan what to study before and after.
All Algorithms Concepts
Recursion
A technique where a function calls itself to solve progressively smaller instances of the same problem. Every recursive solution has two parts: a base case that stops the recursion, and a recursive case that breaks the problem into a smaller version of itself.
"Russian dolls—open one, find a smaller one inside. Repeat until you reach the smallest."
Why it matters: Recursion produces elegant, concise solutions for problems with naturally self-similar structure—tree traversals, fractals, file system navigation, and divide-and-conquer algorithms all rely on recursion as their natural expression.
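The two parts described above, base case and recursive case, can be seen in a minimal Python sketch (the classic factorial function, chosen here purely for illustration):

```python
def factorial(n):
    """Compute n! recursively."""
    if n <= 1:                       # base case: stops the recursion
        return 1
    return n * factorial(n - 1)      # recursive case: a smaller instance
```

Each call waits on a slightly smaller version of the same problem, like the Russian dolls in the analogy, until the base case ends the chain.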
Algorithm Efficiency
A measure of how an algorithm's running time or memory use grows as the size of its input grows, usually described with Big-O notation such as $O(n)$ or $O(n \log n)$ rather than in raw seconds.
"Does doubling the data double the time? Or quadruple it? Or barely change it?"
Why it matters: Efficiency determines whether software can handle real-world data sizes. Google's search engine processes billions of queries because it uses $O(\log n)$ algorithms, not $O(n^2)$. In fields from genomics to finance, choosing the right algorithm can mean the difference between seconds and centuries of computation.
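One way to make the "does doubling the data double the time?" question concrete is to count steps instead of measuring seconds. This sketch (with illustrative function names) counts the work done by an $O(n)$ single pass versus an $O(n^2)$ all-pairs comparison:

```python
def count_linear_steps(n):
    """O(n): one pass over the data -- steps grow in proportion to n."""
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

def count_quadratic_steps(n):
    """O(n^2): compare every pair -- steps grow with the square of n."""
    steps = 0
    for i in range(n):
        for j in range(n):
            steps += 1
    return steps

for n in (100, 200, 400):
    print(n, count_linear_steps(n), count_quadratic_steps(n))
```

Doubling `n` doubles the linear count but quadruples the quadratic one, which is exactly the difference Big-O notation captures.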
Searching
The process of locating a specific item or value within a collection of data using a systematic strategy. The two fundamental approaches are linear search (check every element one by one) and binary search (repeatedly halve the search space in sorted data).
"Looking for a book on a shelf, a name in a list, a file on your computer."
Why it matters: Searching is one of the most common operations in computing. Databases, search engines, spell checkers, and file systems all depend on efficient search algorithms to find information quickly among millions or billions of items.
Sorting
Rearranging items in a collection into a defined order, such as smallest to largest or alphabetical. Sorting is one of the most studied problems in computer science, with algorithms ranging from simple (bubble sort, $O(n^2)$) to efficient (merge sort, $O(n \log n)$).
"Putting things in order—alphabetical, numerical, by date—so they are easier to find and use."
Why it matters: Sorted data enables much faster searching and makes output far easier for humans to read. Sorting is a prerequisite for binary search and is used everywhere—from organizing search results to rendering graphics to scheduling tasks in operating systems.
Binary Search
An efficient algorithm for finding a target value in a sorted list by repeatedly halving the search range. At each step, compare the target to the middle element: if equal, the search is done; if smaller, search the left half; if larger, search the right half.
"Looking up a word in a dictionary: open to the middle, then go left or right depending on where your word falls."
Why it matters: Binary search is orders of magnitude faster than linear search for large sorted datasets. Searching 1 billion sorted items takes at most 30 comparisons with binary search, compared to up to 1 billion with linear search.
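The halving strategy described above can be sketched in a few lines of Python (assuming the input list is already sorted):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid                # found it
        elif sorted_items[mid] < target:
            low = mid + 1             # target must be in the right half
        else:
            high = mid - 1            # target must be in the left half
    return -1
```

Each loop iteration discards half of the remaining range, which is why the number of comparisons grows with $\log_2 n$ rather than $n$.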
Linear Search
A search algorithm that checks each element in a list one by one, starting from the first, until the target is found or the entire list has been examined. It is the simplest search algorithm and works on both sorted and unsorted data.
"Looking for your keys by checking every pocket and drawer in order."
Why it matters: Linear search works on any list regardless of order, making it the most universally applicable search algorithm. It serves as the baseline against which more efficient algorithms like binary search are measured.
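The check-every-element strategy translates almost directly into Python:

```python
def linear_search(items, target):
    """Check each element in order; return its index, or -1 if not found."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1
```

No sorting is required, but in the worst case every element is examined, which is the $O(n)$ baseline that binary search improves on.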
Merge Sort
A divide-and-conquer sorting algorithm that splits a list in half, recursively sorts each half, then merges the two sorted halves back together in order. The key insight is that merging two already-sorted lists into one sorted list is efficient and straightforward.
"Split a messy deck of cards in half, sort each half, then interleave them back in order."
Why it matters: Merge sort is one of the most efficient general-purpose sorting algorithms and is used in many language standard libraries. Its guaranteed $O(n \log n)$ worst-case performance makes it reliable for any input, unlike quicksort which can degrade to $O(n^2)$.
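The split-sort-merge structure can be sketched in Python. The merge step is the "efficient and straightforward" part noted above: repeatedly take the smaller front element of the two sorted halves.

```python
def merge_sort(items):
    """Sort by splitting, recursively sorting each half, then merging."""
    if len(items) <= 1:                  # base case: already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])       # sort the left half
    right = merge_sort(items[mid:])      # sort the right half
    # merge the two sorted halves into one sorted list
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])              # one half may have leftovers
    merged.extend(right[j:])
    return merged
```

Note that merge sort is itself a divide-and-conquer algorithm built on recursion, which is why those concepts appear earlier in this cluster.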
Bubble Sort
A simple sorting algorithm that repeatedly walks through the list, compares each pair of adjacent elements, and swaps them if they are in the wrong order. This process repeats until no more swaps are needed, meaning the list is fully sorted.
"Heavier bubbles sink and lighter bubbles rise — larger values slowly move to the end of the list."
Why it matters: Bubble sort is easy to understand and implement, making it an excellent teaching example for understanding how sorting works. However, its $O(n^2)$ performance makes it impractical for large datasets, which is why more efficient algorithms like merge sort are used in practice.
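A minimal Python sketch of the repeated compare-and-swap pass, stopping when a full pass makes no swaps:

```python
def bubble_sort(items):
    """Repeatedly swap adjacent out-of-order pairs until none remain."""
    items = list(items)                  # sort a copy, leave input intact
    n = len(items)
    while True:
        swapped = False
        for i in range(n - 1):
            if items[i] > items[i + 1]:  # adjacent pair out of order
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:                  # a clean pass means we are done
            return items
```

Each full pass bubbles the largest remaining value to the end of the list, matching the analogy above, and the nested pass structure is where the $O(n^2)$ cost comes from.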
Divide and Conquer
Divide and conquer is an algorithmic strategy that splits a problem into smaller subproblems of the same kind, solves those smaller problems, and then combines their solutions into one final answer. It is a structured form of decomposition often paired with recursion.
"Break a big hard task into smaller versions of the same task, solve each one, then stitch the answers together."
Why it matters: Many of the fastest algorithms in computer science use divide and conquer. It helps students understand recursion, efficient sorting, and the structure of complex algorithms.
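The split-solve-combine pattern applies beyond sorting. As a small illustration (not taken from the text above), here is finding the largest value in a list by dividing it in half:

```python
def largest(items):
    """Find the largest value: split, solve each half, combine with max()."""
    if len(items) == 1:                   # small enough to answer directly
        return items[0]
    mid = len(items) // 2
    left_best = largest(items[:mid])      # solve the left subproblem
    right_best = largest(items[mid:])     # solve the right subproblem
    return max(left_best, right_best)     # combine the two answers
```

The three stages named in the definition, divide, solve, combine, each map to one line of the recursive case.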