Problems
Generally speaking, a problem is a task that an algorithm is trying to solve. An instance of a problem is that problem with a specific input. The example the College Board CED gives is that sorting is a problem, while sorting the list [2, 3, 1, 7] is an instance of that problem.
Two common problem types are decision problems and optimization problems. Decision problems have a yes/no answer, while optimization problems ask for the best solution to the task at hand.
Finding out whether a number is prime is a decision problem, because the question can be answered with a yes or a no. Finding the shortest path between two cities, on the other hand, is an optimization problem: it asks for the best answer (in this case, the shortest path), not just any answer.
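To make the distinction concrete, here is a minimal Python sketch (the function names and sample data are illustrations of ours, not anything from the CED): the decision problem returns a yes/no answer, while the optimization problem picks the best answer among several candidates.

```python
def is_prime(n):
    """Decision problem: the answer is simply yes (True) or no (False)."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def shortest_route(routes):
    """Optimization problem: among many candidate answers, return the best one.

    routes maps a route name to its length (made-up data for illustration).
    """
    return min(routes, key=routes.get)

print(is_prime(17))                                 # True -> a yes/no answer
print(shortest_route({"A": 42, "B": 35, "C": 51}))  # "B"  -> the best answer
```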

Efficiency
An algorithm's efficiency is an estimate of how many computational resources (such as time, memory, or processing power) it uses. It is formally expressed as a function of the size of the input, which simply means the efficiency varies with the input size: the bigger the input, the more resources the algorithm generally uses. There are formal ways to calculate efficiency, but you don't need to know them for the AP CSP exam.
Informally, efficiency is measured by determining how many times a statement or group of statements executes. Algorithms with polynomial efficiency or lower are said to run in a reasonable amount of time, while algorithms with exponential or factorial efficiency run in an unreasonable amount of time.
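For example, here is a minimal Python sketch (hypothetical code, not from the CED) of that informal counting idea: tally how many times the loop body runs and express the tally in terms of the input size n.

```python
def count_executions(n):
    """Count how many times the loop body executes for an input of size n."""
    count = 0
    for i in range(n):   # the loop body runs once per item
        count += 1
    return count

# The count grows in direct proportion to n (linear, a polynomial),
# so this algorithm runs in a reasonable amount of time.
print(count_executions(10))    # 10
print(count_executions(1000))  # 1000
```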
Different correct algorithms for the same problem can have different efficiencies, just as different ways of solving the same task can take more or less time or effort.
Some problems can't be solved in a reasonable amount of time. In those cases, we turn to an approximate solution instead; a technique for finding such an approximate solution is known as a heuristic.
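As one illustration (a sketch with made-up distances, not an algorithm prescribed by the CED), a "nearest-neighbor" heuristic for route planning repeatedly travels to the closest unvisited city. It runs quickly and usually produces a good route, but not necessarily the shortest possible one.

```python
def nearest_neighbor_route(start, distances):
    """Greedy heuristic: always travel to the closest unvisited city next.

    distances is a dict of dicts: distances[a][b] = distance from a to b.
    Returns an ordering of the cities; good, but not guaranteed optimal.
    """
    route = [start]
    unvisited = set(distances) - {start}
    while unvisited:
        current = route[-1]
        # pick the closest city we haven't visited yet
        next_city = min(unvisited, key=lambda city: distances[current][city])
        route.append(next_city)
        unvisited.remove(next_city)
    return route

# Made-up example distances between three cities
d = {
    "A": {"B": 5, "C": 9},
    "B": {"A": 5, "C": 3},
    "C": {"A": 9, "B": 3},
}
print(nearest_neighbor_route("A", d))  # ['A', 'B', 'C']
```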
Frequently Asked Questions
How do I tell if my algorithm is efficient or not?
You tell if an algorithm is “efficient” by estimating how its resource use (usually time) grows as the input size increases. For AP CSP, do this informally—count how many times key statements run for different input sizes (EK AAP-4.A.5). If running time grows like a polynomial (constant, linear, quadratic, cubic, etc.) it’s considered reasonable; if it grows exponentially or factorially, it’s usually impractical (EK AAP-4.A.7). Different correct algorithms can have very different efficiencies (EK AAP-4.A.6), so compare counts on the same problem instance. If no efficient algorithm exists for a problem instance, use a heuristic or approximation that gives good-enough answers faster (EK AAP-4.A.8–9). For a quick review, see the Topic 3.17 study guide (https://library.fiveable.me/ap-computer-science-principles/unit-3/algorithmic-efficiency/study-guide/jGSWIqW49BtrQ8dqCWFd) and try practice questions (https://library.fiveable.me/practice/ap-computer-science-principles).
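As a quick illustration of comparing counts on the same problem instance, here is a hedged Python sketch (the function names are ours): both algorithms correctly sum the numbers 1 through n, but one executes its key statement n times while the other does a single calculation.

```python
def sum_with_loop(n):
    """Correct, but the addition statement runs n times (linear)."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_with_formula(n):
    """Also correct, but does one calculation regardless of n (constant)."""
    return n * (n + 1) // 2

n = 1_000_000
print(sum_with_loop(n) == sum_with_formula(n))  # True: same answer,
                                                # very different statement counts
```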
What's the difference between a problem and an instance of a problem?
A problem is the general task you want solved; an instance is that problem with concrete input. For example (from the CED): “sorting” is a problem; sorting the list [2, 3, 1, 7] is a specific instance of that problem (EK AAP-4.A.1). Problems can be decision (yes/no) or optimization (find the best solution)—each instance supplies the actual data the algorithm runs on (EK AAP-4.A.2). This distinction matters for efficiency: you judge an algorithm’s resources as a function of input size (EK AAP-4.A.3–AAP-4.A.6), and some problem instances may be easy while others (same problem, different inputs) are hard. For more on algorithmic efficiency and examples, check the Topic 3.17 study guide (https://library.fiveable.me/ap-computer-science-principles/unit-3/algorithmic-efficiency/study-guide/jGSWIqW49BtrQ8dqCWFd). For extra practice, Fiveable has lots of practice problems (https://library.fiveable.me/practice/ap-computer-science-principles).
I'm confused about what makes an algorithm run in "reasonable time" - can someone explain?
“Reasonable time” basically means the algorithm’s runtime grows at a manageable rate as input size increases. In AP CSP you don’t need formal Big-O math, but the CED says algorithms that are polynomial time or lower (constant, linear, quadratic, cubic, etc.) are considered reasonable; exponential or factorial growth is usually not. Practically, linear (do one pass through the data) and quadratic (nested loops over data) are often OK for small/medium inputs, while trying every possible combination (brute force) can blow up fast and become impractical. You can estimate runtime informally by counting how many times key statements run (EK AAP-4.A.5). When no efficient exact algorithm exists for an optimization problem, use heuristics or approximation methods to get a good-enough answer faster (EK AAP-4.A.8–4.A.9). For more AP-aligned review, see the Topic 3.17 study guide (https://library.fiveable.me/ap-computer-science-principles/unit-3/algorithmic-efficiency/study-guide/jGSWIqW49BtrQ8dqCWFd) and unit resources (https://library.fiveable.me/ap-computer-science-principles/unit-3). Practice problems are at (https://library.fiveable.me/practice/ap-computer-science-principles).
What's the syntax for measuring how many times a `for` loop executes?
Two common ways to measure how many times a for loop executes:

1) Use the loop index or bounds (informal count). If your loop is “for i from 1 to n do …” it executes n times. That’s often enough for AP CSP: express runtime as a function of input size (n) per EK AAP-4.A.5.
2) Add an explicit counter inside the loop. Initialize count ← 0 before the loop, then inside the loop do count ← count + 1. After the loop, count holds the number of iterations.

Examples:

Pseudocode:

```
count ← 0
FOR i ← 1 TO n
    count ← count + 1
END FOR
// count = n
```

Python:

```python
count = 0
for i in range(n):  # runs n times
    count += 1
```

For AP exam questions, either showing the loop bounds (e.g., “for i = 1 to n → n iterations”) or a short counter like above is fine. For more practice, see the Topic 3.17 study guide (https://library.fiveable.me/ap-computer-science-principles/unit-3/algorithmic-efficiency/study-guide/jGSWIqW49BtrQ8dqCWFd) or unit resources (https://library.fiveable.me/ap-computer-science-principles/unit-3).
When should I use a heuristic solution instead of trying to find the perfect answer?
Use a heuristic when finding a guaranteed optimal solution is impractical because of input size, time, or resource limits. The CED says algorithms with polynomial time (constant, linear, quadratic, etc.) run in a reasonable time, while exponential or factorial ones often don’t (EK AAP-4.A.7, A.8). So choose a heuristic when:

- The exact algorithm would take exponential/factorial time for your input size (too slow).
- You need a result fast (real-time systems, user-facing apps, or limited CPU/memory).
- “Good enough” answers are acceptable (optimization problems where near-optimal solutions suffice).
- You want predictable, repeatable runtimes instead of unpredictable long waits.

Heuristics trade optimality for speed and simplicity (EK AAP-4.A.9). For AP prep, remember to explain why a heuristic is appropriate in a prompt (time/resource constraints, input size) and cite that you’re using an approximation—you won’t be graded down for using heuristics when optimal methods are impractical. For review, check the Topic 3.17 study guide (https://library.fiveable.me/ap-computer-science-principles/unit-3/algorithmic-efficiency/study-guide/jGSWIqW49BtrQ8dqCWFd) and try practice problems (https://library.fiveable.me/practice/ap-computer-science-principles).
What's the difference between a decision problem and an optimization problem?
A decision problem asks a yes/no question about an instance (e.g., “Is there a path from A to B?”). An optimization problem asks you to find the best solution among many (e.g., “What is the shortest path from A to B?”). In AP terms: both are “problems” (EK AAP-4.A.1), but decision problems return a boolean (EK AAP-4.A.2) while optimization problems return a best value or choice. Efficiency matters for both: some instances can be solved in polynomial (reasonable) time, others may require exponential/factorial time and be impractical (EK AAP-4.A.7, AAP-4.A.8). When exact optimization is impractical, you use heuristics or approximations that give good but not guaranteed-optimal answers (EK AAP-4.A.9). For more examples and quick review, check the Topic 3.17 study guide (https://library.fiveable.me/ap-computer-science-principles/unit-3/algorithmic-efficiency/study-guide/jGSWIqW49BtrQ8dqCWFd) and practice problems (https://library.fiveable.me/practice/ap-computer-science-principles).
How do I count the number of operations in my code to check efficiency?
Counting operations (informally) is how you estimate an algorithm’s efficiency for AP CSP. Do this:

- Identify the input size n (e.g., length of a list). EK AAP-4.A.1 and AAP-4.A.3 focus on input size and resources.
- Count how many times each statement or block runs (EK AAP-4.A.5). Single statements = constant time. A loop that runs n times → body runs n times. A nested loop of two loops each to n → ~n·n = n^2 times. Conditionals: count the statements executed on each branch, but use the worst- or typical-case you care about.
- Add counts for sequential steps. Multiply for nested repetition. Ignore fixed constants when comparing growth (you’re comparing polynomial vs exponential behavior per EK AAP-4.A.7).
- If the count grows exponentially or factorially, note it’s likely “unreasonable” and consider heuristics or approximations (EK AAP-4.A.7–4.A.9).

For practice and more examples, check the Topic 3.17 study guide (https://library.fiveable.me/ap-computer-science-principles/unit-3/algorithmic-efficiency/study-guide/jGSWIqW49BtrQ8dqCWFd) and try problems at (https://library.fiveable.me/practice/ap-computer-science-principles).
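Here is a small, hedged sketch (hypothetical code) of the add-for-sequential, multiply-for-nested idea: the single loop contributes about n executions, the nested pair contributes about n · n, and the total count matches n + n².

```python
def count_operations(n):
    """Tally how many times the 'work' statements run for input size n."""
    count = 0

    for i in range(n):        # single loop: runs n times
        count += 1            # sequential step -> add n

    for i in range(n):        # outer loop
        for j in range(n):    # inner loop runs fully for each outer pass
            count += 1        # nested repetition -> multiply: n * n

    return count

n = 20
print(count_operations(n), "==", n + n * n)  # 420 == 420
```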
Why does my nested `for` loop take so much longer than a single loop?
A single loop does work proportional to the input size (you visit each item once). A nested for loop often does work proportional to the square of the input size because for every iteration of the outer loop you run the inner loop fully. Informally count statements: a single loop runs about n times; two nested loops often run about n × n = n^2 times. That’s why when n grows a little, runtime grows a lot faster for nested loops (polynomial: linear vs. square). The CED says you can measure efficiency by counting how many times statements run (EK AAP-4.A.5)—you don’t need formal Big-O for AP CSP, just the idea that some algorithms run “reasonable” time (polynomial) and others don’t. If nested loops are too slow, try reducing repeated work, using a better data structure, or a heuristic if an exact solution is impractical (EK AAP-4.A.7, AAP-4.A.9). For more examples and review, see the Topic 3.17 study guide (https://library.fiveable.me/ap-computer-science-principles/unit-3/algorithmic-efficiency/study-guide/jGSWIqW49BtrQ8dqCWFd) and extra practice (https://library.fiveable.me/practice/ap-computer-science-principles).
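A quick way to feel the difference (a hypothetical sketch, not from the CED) is to count iterations for a few input sizes: the single loop's count doubles when n doubles, while the nested loop's count quadruples.

```python
def single_loop_count(n):
    count = 0
    for i in range(n):          # visits each item once
        count += 1
    return count

def nested_loop_count(n):
    count = 0
    for i in range(n):          # for every outer iteration...
        for j in range(n):      # ...the inner loop runs fully
            count += 1
    return count

for n in (100, 200, 400):
    print(n, single_loop_count(n), nested_loop_count(n))
# 100   100    10000
# 200   200    40000   <- doubling n doubles one count, quadruples the other
# 400   400   160000
```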
I don't understand polynomial vs exponential efficiency - which one is better?
Polynomial is better—algorithms that run in polynomial time (constant, linear, quadratic, cubic, etc.) scale reasonably as input size grows and are considered practical on the AP CSP CED (EK AAP-4.A.7). Exponential (or factorial) time grows so fast that even small increases in input size make them impractical (EK AAP-4.A.3). Intuition: doubling input for a linear algorithm doubles work; for an exponential algorithm it squares or worse. That’s why some problems (especially hard optimization/search problems) have no known polynomial solution and you use heuristics or approximations instead (EK AAP-4.A.8–4.A.9). Note AP CSP doesn’t require formal Big-O math, but you should be able to identify “reasonable” (polynomial or lower) vs “unreasonable” (exponential/factorial) runtimes and when heuristics are appropriate. For a quick review, see the Topic 3.17 study guide (https://library.fiveable.me/ap-computer-science-principles/unit-3/algorithmic-efficiency/study-guide/jGSWIqW49BtrQ8dqCWFd) and try practice questions (https://library.fiveable.me/practice/ap-computer-science-principles).
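To see why exponential and factorial growth are the problem, here is a tiny sketch that just prints n², 2ⁿ, and n! for a few input sizes (the table is ours, computed with Python's math module, not taken from the CED).

```python
import math

print(f"{'n':>3} {'n^2':>8} {'2^n':>12} {'n!':>27}")
for n in (5, 10, 20, 25):
    print(f"{n:>3} {n**2:>8} {2**n:>12} {math.factorial(n):>27}")

# n^2 stays manageable (polynomial = reasonable time), while 2^n and n!
# explode long before n gets large (unreasonable time).
```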
Can someone help me trace through this sorting algorithm to see how many comparisons it makes?
To count comparisons, trace each time the algorithm checks two elements (a conditional like “if A[i] < A[j]”). Example: selection sort on a list of n items always compares the current candidate to each remaining item. Pass 1: compare n−1 times, pass 2: n−2, …, last pass: 1. Total comparisons = (n−1)+(n−2)+…+1 = n(n−1)/2. So for n = 6 you get 15 comparisons. If it’s insertion sort, count comparisons until each item is inserted: best case (already sorted) ≈ n−1 comparisons; worst case (reverse sorted) ≈ n(n−1)/2 comparisons. Bubble sort compares adjacent pairs each pass: up to (n−1) comparisons per pass, repeated until sorted. So: 1) identify which pairwise check is the “comparison,” 2) simulate passes, tally comparisons per pass, 3) add them. For AP review see the Algorithmic Efficiency study guide (https://library.fiveable.me/ap-computer-science-principles/unit-3/algorithmic-efficiency/study-guide/jGSWIqW49BtrQ8dqCWFd) and practice problems (https://library.fiveable.me/practice/ap-computer-science-principles).
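If you want to check the n(n−1)/2 total by tracing code, here is a selection-sort sketch with a comparison counter (the implementation details are ours for illustration, not from the College Board):

```python
def selection_sort_with_count(values):
    """Sort a copy of values and count every pairwise comparison made."""
    a = list(values)
    comparisons = 0
    n = len(a)
    for i in range(n - 1):               # pass i selects the smallest remaining item
        smallest = i
        for j in range(i + 1, n):
            comparisons += 1             # one comparison per candidate checked
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]
    return a, comparisons

sorted_list, count = selection_sort_with_count([2, 3, 1, 7, 5, 4])
print(sorted_list)   # [1, 2, 3, 4, 5, 7]
print(count)         # 15, which matches n(n-1)/2 for n = 6
```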
What are some examples of problems that can't be solved in reasonable time?
Short answer: problems whose best-known algorithms take exponential or factorial time—they don't finish in a “reasonable” time as input size grows. Examples you should know (all are optimization/decision problems where brute force is impractical):

- Traveling Salesman Problem (TSP): finding the shortest route through N cities—checking every route is factorial time, so it explodes quickly.
- Subset-Sum / Knapsack: deciding if some subset meets a target or finding best-value subset—brute force is exponential.
- Boolean satisfiability (SAT): deciding if a logic formula can be true—worst-case exponential for known algorithms.
- Graph coloring and many scheduling/packing problems (NP-hard): best exact solutions scale poorly.

Why it matters for AP CSP: the CED says polynomial-time algorithms are “reasonable”; exponential/factorial are not (EK AAP-4.A.7–8). For these hard problems we often use heuristics or approximation algorithms that run fast but don’t always give the optimal answer (EK AAP-4.A.9). For a quick review, check the Topic 3.17 study guide (https://library.fiveable.me/ap-computer-science-principles/unit-3/algorithmic-efficiency/study-guide/jGSWIqW49BtrQ8dqCWFd) and try practice questions (https://library.fiveable.me/practice/ap-computer-science-principles).
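Here is a small illustration of why brute force blows up for TSP: the sketch below just counts how many complete routes a brute-force search would have to check (with the start city fixed, that's (n−1)! routes). It is intuition-building code of ours, not a full TSP solver.

```python
from itertools import permutations

def brute_force_route_count(num_cities):
    """How many complete routes a brute-force TSP search checks (start city fixed)."""
    other_cities = range(num_cities - 1)
    return sum(1 for _ in permutations(other_cities))   # counts (num_cities - 1)! orderings

for n in (5, 8, 10):
    print(n, "cities ->", brute_force_route_count(n), "routes to check")

# 5 cities -> 24 routes, 8 cities -> 5040, 10 cities -> 362880.
# Each extra city multiplies the count again: factorial growth, so brute force
# becomes unreasonable after just a handful more cities.
```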
How do I write code that measures how long my algorithm takes to run?
Two ways: informal counting (CED-approved) and actual timing in code.

- Count key operations: pick the statement(s) that dominate work (like comparisons or swaps) and add a counter variable you increment each time they run. That gives an informal efficiency measure tied to input size n (EK AAP-4.A.5). It’s what AP expects more than formal Big-O math.
- Time the program: use a high-resolution clock in your language and run the algorithm several times with different input sizes and averaged runs.

Examples:

- Python: use time.perf_counter() before/after to get seconds.
- Java: use System.nanoTime() and convert to seconds.
- JavaScript: use performance.now() in browsers.

Make sure to run each test multiple times, warm up caches/JVM, and vary input sizes to see how runtime grows (polynomial vs exponential, EK AAP-4.A.7). For AP prep, practice both counting operations and timing on sample problems (see the Topic 3.17 study guide (https://library.fiveable.me/ap-computer-science-principles/unit-3/algorithmic-efficiency/study-guide/jGSWIqW49BtrQ8dqCWFd) and lots of practice questions (https://library.fiveable.me/practice/ap-computer-science-principles)).
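Here is a minimal, hedged example of the timing approach in Python using time.perf_counter(); the algorithm being timed (a simple nested-loop pair counter we made up) is just a stand-in for whatever you want to measure.

```python
import time

def count_pairs(values):
    """Stand-in algorithm to time: counts equal pairs using nested loops."""
    pairs = 0
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if values[i] == values[j]:
                pairs += 1
    return pairs

for n in (250, 500, 1000):
    data = list(range(n)) * 2              # build an input of size 2n
    start = time.perf_counter()            # high-resolution clock, in seconds
    count_pairs(data)
    elapsed = time.perf_counter() - start
    print(f"n = {2 * n:>5}: {elapsed:.4f} seconds")

# Run each size a few times and average; watch how the time grows as n doubles.
```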
When do I know if I should look for an approximate solution instead of the exact one?
You should look for an approximate (heuristic) solution when finding an exact one is impractical because of efficiency. In AP terms: if the problem is an optimization problem and the best-known exact algorithms grow exponentially or factorially with input size (unreasonable time), use a heuristic or approximation instead. For small n, brute-force exact search might be fine; if n grows and runtime becomes exponential, switch to an approximate method that runs in polynomial time (constant, linear, quadratic, etc.). Heuristics don’t guarantee optimality but give good solutions fast—useful for big inputs, real-time needs, or limited resources. This is explicitly covered in EK AAP-4.A.7–9. For quick review, see the Topic 3.17 study guide (https://library.fiveable.me/ap-computer-science-principles/unit-3/algorithmic-efficiency/study-guide/jGSWIqW49BtrQ8dqCWFd) and practice problems (https://library.fiveable.me/practice/ap-computer-science-principles) to spot when heuristic choices are appropriate on the exam.
What's the difference between linear and quadratic efficiency in simple terms?
Linear vs. quadratic efficiency in simple terms: both describe how the work an algorithm does grows as the input size (n) grows. Linear means work grows proportionally to n—usually one loop over the data (do ~n steps). Quadratic means work grows proportional to n²—usually nested loops (for each item, you loop over every item, so ~n × n steps). Concrete: if n = 100, a linear algorithm does ~100 steps; a quadratic one does ~10,000 steps. Both are polynomial and “reasonable” per the CED, but quadratic gets expensive fast as n increases. For AP CSP you can explain efficiency informally by counting how many times statements run (EK AAP-4.A.5) and recognize when heuristics or better algorithms are needed (EK AAP-4.A.7, AAP-4.A.9). For more review see the Topic 3.17 study guide (https://library.fiveable.me/ap-computer-science-principles/unit-3/algorithmic-efficiency/study-guide/jGSWIqW49BtrQ8dqCWFd) and try practice problems (https://library.fiveable.me/practice/ap-computer-science-principles).
Why do some correct algorithms for the same problem run faster than others?
Different correct algorithms can run faster because they use fewer computational resources (usually time) as the input size grows. Efficiency depends on how many times key statements or loops execute for an input instance—some approaches (like brute-force) repeatedly check many possibilities, while smarter ones (like greedy or divide-and-conquer) reduce the number of checks. The CED calls this “efficiency” as a function of input size and notes you can informally measure it by counting statement executions (EK AAP-4.A.3, AAP-4.A.5, AAP-4.A.6). Algorithms whose work grows as a polynomial (constant, linear, quadratic, etc.) run in reasonable time; ones that grow exponentially or factorially usually don’t (EK AAP-4.A.7). When exact optimal methods are impractical, a heuristic or approximation can give good-enough answers faster (EK AAP-4.A.9). For review, see the Topic 3.17 study guide (https://library.fiveable.me/ap-computer-science-principles/unit-3/algorithmic-efficiency/study-guide/jGSWIqW49BtrQ8dqCWFd) and practice problems (https://library.fiveable.me/practice/ap-computer-science-principles).
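For instance, here is a hedged sketch comparing two correct algorithms for the same problem (finding a value in a sorted list): linear search may check every element, while binary search halves the remaining range each step, so it performs far fewer comparisons on the same instance. The function names and counting convention are ours.

```python
def linear_search(values, target):
    """Check items one by one; returns (index or -1, comparisons made)."""
    comparisons = 0
    for i, v in enumerate(values):
        comparisons += 1
        if v == target:
            return i, comparisons
    return -1, comparisons

def binary_search(values, target):
    """Repeatedly halve the sorted range; returns (index or -1, comparisons made)."""
    comparisons = 0
    low, high = 0, len(values) - 1
    while low <= high:
        mid = (low + high) // 2
        comparisons += 1
        if values[mid] == target:
            return mid, comparisons
        elif values[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1, comparisons

data = list(range(1000))          # a sorted list: 0, 1, ..., 999
print(linear_search(data, 999))   # (999, 1000) -> about n comparisons
print(binary_search(data, 999))   # (999, 10)   -> about log2(n) comparisons
```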