Introduction
Everyone who is familiar with programming knows how to work out the time complexity of an algorithm just by staring at the code; however, few people really know how it works underneath. This article introduces some basic concepts from real analysis to help you build a deeper understanding of algorithm complexity. After introducing those concepts, I will discuss how they are related, using examples.
Boundedness of a Function
In the study of mathematical analysis, people delve into many properties of functions on \(\mathbb{R}\). We all know that for some common functions, even elementary ones such as \(f(x)=\frac{1}{x}\), the graph approaches its horizontal and vertical asymptotes without ever reaching them.
An Interesting Example

Intuitively, we see that \(f(x) > 0\) always holds on the right branch, while \(f(x) < 0\) on the left branch. This follows from the horizontal asymptote \(y=0\), together with the familiar limit $$\lim_{x\to \infty} \frac{1}{x} = 0.$$
It is also straightforward to explain the vertical asymptote \(x = 0\), since $$\lim_{x \to 0^+} \frac{1}{x} = +\infty,$$ and $$\lim_{x \to 0^-} \frac{1}{x} = -\infty.$$
You may wonder why I spend these words introducing such a superficial notion, but this is just to let us take a glimpse of the idea of boundedness. In this case the two branches of the function stretch off in different directions, yet neither ever reaches the x-axis. Noticeably, the graph is bounded by the x-axis, although the situation looks slightly different on the two branches, since they approach the axis from different sides. That difference is easy to remove: by taking the absolute value of the left branch, we obtain a new left branch that is symmetric to the right branch with respect to the y-axis. This lets us consider \(f(x)=\left|\frac{1}{x}\right|\) instead, since the right branch is unaffected by the absolute value. Thus, we can observe that
$$\left|\frac{1}{x}\right| > 0 \text{ for every } x \neq 0, \quad \text{while} \quad \lim_{x \to +\infty} \frac{1}{x} = 0 \text{ and } \lim_{x \to -\infty} \frac{1}{x} = 0.$$ So we can say, mathematically, that this function is bounded below by 0.
Perhaps surprisingly, this is essentially the formal notion of being bounded below; being bounded above can be defined in the same way.
Here are the formal definitions of bounded above and bounded below:
Let \( f \) be a function defined on a set \( D \). If there exists a constant \( M \in \mathbb{R} \) (or \( N \in \mathbb{R} \)) such that for every \( x \in D \), the inequality
$$f(x) \leq M \quad (\text{or } f(x) \geq N)$$ holds, then \( f \) is called bounded above (or bounded below) on \( D \) by \( M \) (or \( N \)). Note that if \( M \) (or \( N \)) is an upper (or lower) bound of \( f \), then any number greater than \( M \) (or less than \( N \)) is also an upper (or lower) bound of \( f \) on \( D \). Formally,
$$\exists M \in \mathbb{R}, \forall x \in D, f(x) \leq M \quad (\text{or } \exists N \in \mathbb{R}, \forall x \in D, f(x) \geq N).$$
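To illustrate with a quick worked example of my own: take \(f(x) = \sin x\) on \(D = \mathbb{R}\). Since $$\forall x \in \mathbb{R}, \quad -1 \leq \sin x \leq 1,$$ the sine function is bounded above by \(M = 1\) and bounded below by \(N = -1\); any larger \(M\) or smaller \(N\) would also serve as a bound.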
But these definitions only describe one side of a function’s behaviour. We still need two more definitions to decide whether a function as a whole is bounded or unbounded, so that we can categorize functions accordingly.
Bounded Function
Let \( f \) be a function defined on a set \( D \). If there exist constants \( N, M \in \mathbb{R} \) such that \( N \leq M \) and for every \( x \in D \), the inequality \[N \leq f(x) \leq M \]holds, then \( f \) is called bounded on \( D \). We often express this by saying: A function \( f \) defined on \( D \) is called bounded if there exists a constant \( M > 0 \) such that for every \( x \in D \), the inequality \[|f(x)| \leq M \]holds. Formally,\[\exists M \in \mathbb{R}, \forall x \in D, |f(x)| \leq M. \] In this case, if \( f \) is bounded on \( D \), its graph will lie completely between the lines \( y = M \) and \( y = -M \).
(Note: now you should understand what makes \(y = \frac{1}{x}\) special. This function is unbounded on its domain \(\mathbb{R} \setminus \{0\}\): it has no fixed upper bound \(M\) or lower bound \(-M\), but instead takes arbitrarily large positive and negative values as \(x\) approaches 0.)
Simply by negating this definition with De Morgan’s laws, we get the formal definition of an unbounded function.
Unbounded Function
Let \( f \) be a function defined on a set \( D \). If for every positive constant \( M \in \mathbb{R} \), there exists \( x_\infty \in D \) such that the inequality
\[
|f(x_\infty)| > M
\]
holds, then \( f \) is called unbounded on \( D \). Formally,
\[
\forall M > 0, \exists x_\infty \in D, |f(x_\infty)| > M.
\]
(So \(y = \frac{1}{x}\) is indeed an unbounded function.)
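To make the two definitions concrete, here is a small Python sketch of my own (sampling can only give numerical evidence, never a proof). For the bounded case it checks \(|\sin x| \leq 1\) on sample points; for the unbounded case it exhibits, for any \(M\) you choose, a witness \(x\) with \(\left|\frac{1}{x}\right| > M\), matching the \(\forall M\, \exists x\) structure of the definition.

import math

def check_upper_bound(f, xs, M):
    # Numerically check |f(x)| <= M on a finite sample of points.
    # This only gives evidence for boundedness, not a proof.
    return all(abs(f(x)) <= M for x in xs)

def unboundedness_witness(M):
    # For any M > 0, return an x with |1/x| > M.
    # Picking x = 1 / (M + 1) works because |1/x| = M + 1 > M.
    return 1.0 / (M + 1.0)

samples = [k / 10.0 for k in range(-100, 101) if k != 0]
print(check_upper_bound(math.sin, samples, 1.0))  # True: sin stays within [-1, 1]

for M in (10.0, 1e6, 1e12):
    x = unboundedness_witness(M)
    print(abs(1.0 / x) > M)  # True for every M we try: 1/x is unbounded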
Time Complexity of Algorithms
With this knowledge in place, we can examine time complexity theory from a completely different perspective. I would call the time complexity of algorithms “an elegant application of the theory of function boundedness”.
Most of us started learning algorithms with the big \(O\) notation, which is used to analyse the time an algorithm spends on an input of a given size under the most pessimistic estimate. Let’s take a look at the definition.
Worst Case Time Complexity
Let \( f(x) \) be a function. We define \( O(f(x)) \) as follows: \[ O(f(x)) = \{ g(x) : \exists C > 0, \exists x_0 \in \mathbb{R}, \text{ such that } \forall x > x_0, |g(x)| \leq C |f(x)| \} \]
Yes, rigorously speaking, big \(O\) notation denotes a set of functions. Many people fail to realize this.
In natural language, \( O(f(x)) \) represents the set of functions \( g(x) \) for which there exists a positive constant \( C \) and a real number \( x_0 \) such that for all \( x \) greater than \( x_0 \), the absolute value of \( g(x) \) is at most \( C \) times the absolute value of \( f(x) \). This means that as \( x \) becomes very large, \( g(x) \) grows no faster than \( f(x) \) up to a constant factor.
From the perspective of a concrete function (in practice, this is the exact running-time expression, such as \(n^3+2n+2\)), we say that a function \( f(n)\) is \( O(g(n)) \) exactly when \( f(n) \in O(g(n)) \). That is,
\[ \exists c > 0 \text{ and } n_0 \in \mathbb{N} \text{ such that } \forall n > n_0, \; 0 \leq f(n) \leq c \cdot g(n). \]
In other words, the function \( f(n) \) is said to be \( O(g(n)) \) if there exists a positive constant \( c \) and a natural number \( n_0 \) such that for all \( n \) greater than \( n_0 \), the value of \( f(n) \) is bounded above by \( c \cdot g(n) \) and is non-negative.
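As a concrete sanity check of this definition (a minimal sketch of my own, reusing the example expression \(n^3+2n+2\) mentioned above), we can verify numerically that \(f(n) \leq c \cdot g(n)\) for a chosen \(c\) and \(n_0\):

def f(n):
    return n**3 + 2*n + 2  # an exact running-time expression

def g(n):
    return n**3            # the claimed growth rate

c, n0 = 5, 1               # constants we claim witness f(n) being O(g(n))

# Check the defining inequality on a finite range of n > n0.
# This cannot prove the claim for all n, but it illustrates the definition.
print(all(f(n) <= c * g(n) for n in range(n0 + 1, 10000)))  # True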
Relation to Boundedness Theory
If you have above-average mathematical intuition, you may have realized that this definition is equivalent to saying that the complexity function is bounded, with a special upper bound of the form \(c \cdot g(n)\) for some constant \(c\). Let’s take a closer look.
This definition of Big-O notation is equivalent to saying that the function \( f(n) \) has an upper bound relative to \( g(n) \). Specifically, the condition \( |f(n)| \leq c \cdot |g(n)| \) for all \( n > n_0 \) means that \( f(n) \) does not grow faster than a constant multiple of \( g(n) \). Since \( c \) and \( n_0 \) are constants, this gives a fixed upper limit on the growth rate of \( f(n) \) relative to \( g(n) \). (Note that we can use the absolute value here because all time complexity functions are assumed to take non-negative values, which follows from their nature and practical meaning.)
Let’s be more meticulous, since we want a precise argument rather than an ambiguous postulation. Recall the definition: \[\exists M \in \mathbb{R}, \forall x \in D, |f(x)| \leq M. \]
Now we consider \[ \exists c > 0 \text{ and } n_0 \in \mathbb{N} \text{ such that } \forall n > n_0, \; 0 \leq f(n) \leq c \cdot g(n). \]
As we mentioned, for non-negative running-time functions, \(0 \leq f(n) \leq c \cdot g(n)\) says the same thing as \(|f(n)| \leq c \cdot |g(n)|\). There is one subtlety, though: in the boundedness definition, \(M\) must be a single constant, while \(c \cdot g(n)\) still depends on \(n\). The clean way to line the two statements up is to divide by \(g(n)\) (which is positive for \(n > n_0\) for any sensible growth-rate function) and consider the quotient \(h(n) = \frac{f(n)}{g(n)}\) on the domain \(D = \{n \in \mathbb{N} : n > n_0\}\). The Big-O inequality then becomes \(|h(n)| \leq c\), and since \(c \in \mathbb{R}\), the constant \(c\) plays exactly the role of \(M\) in the boundedness definition.
Hence, we have shown that $$(\exists c > 0 \text{ and } n_0 \in \mathbb{N} \text{ such that } \forall n > n_0, \; 0 \leq f(n) \leq c \cdot g(n)) \to (\exists M \in \mathbb{R}, \forall n \in D, |h(n)| \leq M).$$
In other words, saying that \(f(n)\) is \(O(g(n))\) is precisely saying that the quotient \(f(n)/g(n)\) is a bounded function on the tail domain \(D\). The Big-O condition is a special case of boundedness: it has a stronger precondition (we only look at sufficiently large \(n\)) and the same kind of conclusion.
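To see the boundedness of the quotient directly (again just an illustrative sketch of my own), we can tabulate \(h(n) = f(n)/g(n)\) for the same example \(f(n) = n^3+2n+2\), \(g(n) = n^3\), and watch it stay below the constant \(c = 5\) while approaching 1:

def h(n):
    # The quotient whose boundedness is exactly the Big-O statement.
    return (n**3 + 2*n + 2) / n**3

for n in (2, 10, 100, 1000):
    print(n, h(n))   # the values decrease toward 1, all below c = 5

print(all(h(n) <= 5 for n in range(2, 10000)))  # True on this sample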
Best Case Time Complexity
Similar to worst case complexity, best case time complexity describes the scenario where the algorithm performs the fewest operations, usually under the most favourable conditions. For a running time \(T_{\text{best}}(n)\) and a candidate growth rate \(g(n)\), best case time complexity is defined using the Big-Omega notation:
\[\exists c > 0 \text{ and } n_0 \in \mathbb{N} \text{ such that } \forall n > n_0, \; T_{\text{best}}(n) \geq c \cdot g(n).\]
This definition is equivalent to the concept of a lower bound for a function. Specifically, the condition \(T_{\text{best}}(n) \geq c \cdot g(n)\) for all \(n > n_0\) implies that the function \(T_{\text{best}}(n)\) grows at least as fast as a constant multiple of \(g(n)\). Since \(c\) and \(n_0\) are constants, this provides a fixed lower limit on the growth rate of \(T_{\text{best}}(n)\) relative to \(g(n)\).
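A tiny worked example of my own: the function \(T_{\text{best}}(n) = 3n - 5\) is \(\Omega(n)\), because choosing \(c = 2\) and \(n_0 = 5\) gives $$\forall n > 5, \quad 3n - 5 \geq 2n,$$ which is exactly the defining inequality.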
In the same way, one can show that this is again an instance of boundedness: this time the quotient \(T_{\text{best}}(n)/g(n)\) is bounded below by \(c\). You may try working this out yourself.
Asymptotic Complexity
Asymptotic complexity, or exact complexity, is a stronger notion that applies when the best case and worst case complexities of an algorithm are the same. This indicates that the algorithm’s running time remains consistent regardless of the input conditions. When best case and worst case complexities are identical, we can use the Squeeze Theorem to rigorously prove the algorithm’s asymptotic complexity.
The Squeeze (Sandwich) Theorem states that if \( f(n) \) is sandwiched between two functions \( g(n) \) and \( h(n) \) that both converge to the same limit as \( n \) approaches infinity, then \( f(n) \) also converges to that limit. Applying this idea to time complexity: if the worst case is \( O(g(n)) \) and the best case is \( \Omega(g(n)) \), then for every input the running time is squeezed between constant multiples of \( g(n) \), so the overall complexity is of the order of \( g(n) \).
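As a small toy illustration of my own (not a rigorous treatment): summing the elements of an array performs exactly \(n\) loop iterations no matter what values are stored, so its best case and worst case coincide.

def array_sum(arr):
    # Always performs exactly len(arr) additions,
    # so the best case and the worst case are both n iterations.
    total = 0
    for value in arr:   # runs n times for any input of size n
        total += value
    return total

print(array_sum(list(range(10))))  # 45, after exactly 10 iterations
print(array_sum([0] * 10))         # 0, still after exactly 10 iterations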
In summary, the best case time complexity \( T_{\text{best}}(n) \) can be defined similarly to the worst case time complexity, and both can be shown to correspond to the boundedness of functions. Furthermore, when these complexities coincide, the concept of asymptotic complexity provides a unified view of an algorithm’s performance, and the Squeeze Theorem can be used to make this equivalence rigorous. But since this article is already quite long, I will leave that discussion for a future post.
A Practical Example of Applying this Equivalence
Now let’s work through a real algorithm analysis problem: bubble sort.
def bubble_sort(arr):
    n = len(arr)
    # Traverse through all array elements
    for i in range(n - 1):
        swapped = False
        # Last i elements are already in place,
        # so traverse the array from 0 to n-i-1
        for j in range(0, n - i - 1):
            # Swap if the element found is greater than the next element
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        # If a full pass makes no swaps, the array is already sorted,
        # so we can exit early (this is what the best-case analysis relies on)
        if not swapped:
            break
    return arr
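A quick usage example (the swapped flag above is what provides the early exit that the best-case analysis below relies on):

print(bubble_sort([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]
print(bubble_sort([1, 2, 3, 4, 5]))   # already sorted: one pass, no swaps, early exit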
Prove that bubble sort has \(O(n^2)\) worst case time complexity and \(\Omega(n)\) best case time complexity.
Worst Case Analysis
In the worst case, the array is sorted in descending order. The total running time \( T(n) \) can be expressed as:
\[T(n) = \sum_{i=1}^{n-1} \sum_{j=1}^{n-i} 1 = \sum_{i=1}^{n-1} (n-i) = \frac{(n-1)n}{2}\]
Simplifying, we get:
\[T(n) \leq \frac{n^2}{2}\]
According to the definition of Big-O notation, we need to find a positive constant \( c \) and a natural number \( n_0 \) such that for all \( n > n_0 \): $$T(n) \leq c \cdot n^2$$
Let \( c = \frac{1}{2} \), then:
$$T(n) \leq \frac{n^2}{2} = c \cdot n^2$$
Clearly, this inequality holds for all \( n \geq 1 \). Therefore, we have proven that \(T(n) = O(n^2)\).
Best Case Analysis
In the best case, the array is already sorted in ascending order. The first pass performs no swaps, so the swapped flag allows an early exit after a single traversal. The running time is therefore linear:
$$T_{\text{best}}(n) = n$$
According to the definition of Big-O notation, we need to find a positive constant \( c \) and a natural number \( n_0 \) such that for all \( n > n_0 \):
\[T_{\text{best}}(n) \geq c \cdot n\]
Let \( c = 1 \), then: $$T_{\text{best}}(n) \geq n = c \cdot n$$
Clearly, this inequality holds for all \( n \geq 1 \). Therefore, we have proven that \(T_{\text{best}}(n) = \Omega(n)\).
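Before moving to the conclusion, here is a small empirical check of both analyses (my own sketch): a version of bubble sort instrumented to count comparisons, run on a reversed input (worst case) and an already-sorted input (best case).

def bubble_sort_count(arr):
    # Bubble sort that also returns the number of comparisons performed.
    arr = list(arr)
    n = len(arr)
    comparisons = 0
    for i in range(n - 1):
        swapped = False
        for j in range(0, n - i - 1):
            comparisons += 1
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:   # early exit: a pass with no swaps means sorted
            break
    return arr, comparisons

n = 100
_, worst = bubble_sort_count(list(range(n, 0, -1)))  # descending input
_, best = bubble_sort_count(list(range(n)))          # ascending input
print(worst, n * (n - 1) // 2)  # 4950 4950: matches (n-1)n/2
print(best, n - 1)              # 99 99: one linear pass of n-1 comparisons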
Conclusion
To conclude, we reiterate the connection with bounded functions. The worst-case proof is an instance of boundedness because we have shown that \( T(n) \) is bounded above by \( \frac{n^2}{2} \), meaning that there exists a constant \( c = \frac{1}{2} \) such that for all \( n \geq 1 \), \(T(n) \leq c \cdot n^2\).
Similarly, we have shown that \( T_{\text{best}}(n) \) is bounded below by \( n \), meaning that there exists a constant \( c = 1 \) such that for all \( n \geq 1 \), \(T_{\text{best}}(n) \geq c \cdot n\).
This demonstrates that the growth rate of \( T(n) \) has an upper bound of \( \frac{1}{2} n^2 \) and the growth rate of \( T_{\text{best}}(n) \) has a lower bound of \( n \), confirming that \( T(n) = O(n^2) \) and \( T_{\text{best}}(n) = \Omega(n) \). Equivalently, the quotients \( T(n)/n^2 \) and \( T_{\text{best}}(n)/n \) are bounded functions, which is exactly the boundedness property we started from.
Wrapping up
So, there you have it! We’ve taken a deep dive into the world of function boundedness and algorithm complexity. Who knew that these seemingly different concepts were so closely related?
We started with some basic math stuff about bounded functions – you know, those functions that don’t go wild and shoot off to infinity. Then we connected the dots to see how this relates to the big \(O\) notation we use all the time in algorithm analysis. Pretty neat, right?
The cool part is how we can use this knowledge to better understand and analyse algorithms. We even put it to the test with the bubble sort algorithm. Seeing how the best and worst-case scenarios play out in terms of boundedness really brings the theory into practice.
This is really a fabulous example of how computer science is related to mathematics, and I find it fascinating how pure math concepts can have such practical applications in computer science.
There’s still more to explore, like asymptotic complexity and the squeeze theorem. Maybe we’ll dive into those in a future post. For now, I hope this article has given you a new perspective on algorithm analysis and maybe even sparked some interest in the mathematical side of things.
Remember, the next time someone asks you about big O notation, you can impress them by saying, “Oh, you mean that special case of function boundedness?” Just kidding – maybe save that for your fellow code geeks!
Thanks for sticking with me through all the math and code. If you found this article interesting or have any thoughts to share, drop a comment below. Until next time, happy coding and learning math!