Sum of Series 1 + 1/2 + 1/3 + ... + 1/n in C, C++, Java & Python – Short and Simple Code with Explanation & Examples


C Program

#include <stdio.h>
int main() {
    int n, i;
    double sum = 0;
    printf("Enter n: ");
    scanf("%d", &n);
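    /* 1.0 / i forces floating-point division; plain 1/i would truncate to 0 */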
    for(i = 1; i <= n; i++)
        sum += 1.0 / i;
    printf("Sum = %.2f\n", sum);
    return 0;
}

C Output

Input:
Enter n: 4

Output:
Sum = 2.08



C++ Program

#include <iostream>
using namespace std;
int main() {
    int n;
    double sum = 0;
    cout << "Enter n: ";
    cin >> n;
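    // 1.0 / i keeps the division in floating point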
    for(int i = 1; i <= n; i++)
        sum += 1.0 / i;
    cout << "Sum = " << sum << endl;
}

C++ Output

Input:
Enter n: 5

Output:
Sum = 2.28333



Java Program

import java.util.Scanner;
class Main {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int n = sc.nextInt();
        double sum = 0;
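        // accumulate 1/1 + 1/2 + ... + 1/n in double precision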
        for(int i = 1; i <= n; i++)
            sum += 1.0 / i;
        System.out.printf("Sum = %.4f\n", sum);
    }
}

Java Output

Input:
6

Output:
Sum = 2.4500



Python Program

n = int(input("Enter n: "))
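# generator expression adds 1/1 + 1/2 + ... + 1/n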
sum_series = sum(1/i for i in range(1, n+1))
print(f"Sum = {sum_series:.5f}")

Python Output

Input:
7

Output:
Sum = 2.59286


In-Depth Learning – Entire Concept in Paragraphs

Example

For a concrete feel, take n = 7. The sum is 1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + 1/7, which is about 2.5928571428. If you push to n = 10, you get 2.9289682539. Notice how each extra term adds less than the one before, yet the total keeps creeping upward.
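
These values are easy to reproduce. Here is a quick Python check mirroring the program above (the variable names are just illustrative):

h7 = sum(1.0 / k for k in range(1, 8))    # H_7 = 1 + 1/2 + ... + 1/7
h10 = sum(1.0 / k for k in range(1, 11))  # H_10
print(f"{h7:.10f}")   # 2.5928571429
print(f"{h10:.10f}")  # 2.9289682540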


Real-Life Analogy

Imagine filling a bucket where the first pour is a full liter, the next is half a liter, the next is a third of a liter, and so on. Every pour is smaller than the previous one, so your bucket fills more and more slowly. It feels like you might eventually top off, but you never truly finish because there’s always a next fraction to add. The bucket’s level keeps rising, just at a snail’s pace — that’s the harmonic series in a nutshell.


What the Series Is (Concept)

This is the nth harmonic number, denoted H_n:

H_n = \sum_{k=1}^{n} \frac{1}{k}

It grows without bound (it diverges) as n increases, but extremely slowly. In fact, its growth is roughly like the natural logarithm.
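
To see that logarithmic growth concretely, here is a small sketch that prints H_n next to ln n; their difference settles toward a constant (the γ introduced below):

import math

h = 0.0
for n in range(1, 100001):
    h += 1.0 / n
    if n in (10, 100, 1000, 10000, 100000):
        print(f"n={n:>6}  H_n={h:.6f}  ln n={math.log(n):.6f}  diff={h - math.log(n):.6f}")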


Why It Diverges (The Classic Intuition)

A beloved proof groups terms so that each block adds at least 1/2. After the first term 1, take the next two: 1/2 + 1/3 > 1/2. The next four: 1/4 + 1/5 + 1/6 + 1/7 > 4 × 1/8 = 1/2. The next eight contribute more than another 1/2, and so on. You keep stacking 1/2 chunks forever, so the sum keeps growing beyond any fixed bound. This is a wonderfully concrete way to feel divergence.
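
The grouping is easy to verify numerically. Each block below runs from 1/2^k through 1/(2^(k+1) - 1), and every one of them exceeds 1/2:

for k in range(1, 6):
    lo, hi = 2 ** k, 2 ** (k + 1) - 1
    block = sum(1.0 / i for i in range(lo, hi + 1))
    print(f"1/{lo} + ... + 1/{hi} = {block:.4f}")  # always > 0.5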


Growth Rate, Approximations, and the “Magic Constant”

Although H_n diverges, we can estimate it very accurately:

H_n \approx \ln n + \gamma + \frac{1}{2n} - \frac{1}{12n^2} + \frac{1}{120n^4} - \cdots

Here γ ≈ 0.5772156649 is the Euler–Mascheroni constant. Even keeping just the first correction or two is shockingly good. For example:

  • H_10 is 2.9289682539. The approximation ln(10) + γ + 1/20 - 1/1200 gives 2.9289674, already within about 10^-6 of the truth; adding the 1/(120n^4) term sharpens it to 2.9289682579, correct to eight decimal places.

  • H_100 is about 5.18737751764, and the same approximation lands essentially on top of it.

There's also a neat, easy bound you can remember:

\ln n + \gamma + \frac{1}{2(n+1)} < H_n < \ln n + \gamma + \frac{1}{2n}

So H_n is always squeezed tightly between these two expressions.
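
You can check these estimates yourself with a short sketch (GAMMA is hard-coded to the constant quoted above; the function names are illustrative):

import math

GAMMA = 0.5772156649  # Euler-Mascheroni constant, as quoted above

def h_exact(n):
    return sum(1.0 / k for k in range(1, n + 1))

def h_approx(n):
    # ln n + gamma + 1/(2n) - 1/(12n^2) + 1/(120n^4)
    return math.log(n) + GAMMA + 1 / (2 * n) - 1 / (12 * n ** 2) + 1 / (120 * n ** 4)

for n in (10, 100, 1000):
    print(f"n={n:>4}  exact={h_exact(n):.10f}  approx={h_approx(n):.10f}")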


Mathematical Deep Dive (But Friendly)

The harmonic numbers connect to several pillars of math. Using calculus, \int_1^n \frac{dx}{x} = \ln n shows why H_n behaves like ln n. In advanced analysis, H_n relates to the digamma function ψ: H_n = ψ(n+1) + γ. In number theory and algorithms, harmonic numbers pop up anytime you're summing reciprocals or analyzing processes that "thin out" as they progress.
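
If SciPy happens to be installed (it is not part of the standard library, so treat this as an optional check), the digamma identity can be verified directly:

from scipy.special import digamma  # psi function; requires SciPy
import numpy as np

n = 10
h_n = sum(1.0 / k for k in range(1, n + 1))
print(h_n)                              # 2.9289682539682538
print(digamma(n + 1) + np.euler_gamma)  # same value via psi(n+1) + gamma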


Accuracy and Precision in Code

When you compute H_n in code, you're adding a lot of small fractions to a growing sum. On a computer, that can lose precision because floating-point numbers have limited detail. Two practical tips dramatically improve accuracy without changing the algorithmic idea:

Sum from small to large. Add terms in reverse order, from 1/n up to 1/1. Small terms get their fair say before the sum grows big, so rounding hurts less.

Use compensated summation. Techniques like Kahan summation keep track of tiny lost bits and feed them back into the sum. You still loop once, but you carry a small "correction" variable that improves accuracy a lot, especially for large n.
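
Here is a minimal sketch combining both tips; Python's built-in math.fsum offers a correctly rounded alternative if you'd rather not hand-roll this:

def harmonic_kahan(n):
    """Kahan (compensated) summation of 1/n + ... + 1/1."""
    s = 0.0
    c = 0.0  # carries the low-order bits lost at each addition
    for i in range(n, 0, -1):  # small terms first
        y = 1.0 / i - c
        t = s + y
        c = (t - s) - y  # what was lost folding y into s
        s = t
    return s

print(harmonic_kahan(1000000))  # ~14.3927267229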

If you just need a fast estimate for very large n (say n ≥ 10^7), using the asymptotic formula ln n + γ + 1/(2n) - 1/(12n^2) is both quick and surprisingly precise. If you need many decimal digits exactly, consider high-precision arithmetic (like Java's BigDecimal or Python's decimal) together with the small-to-large order or pairwise summation.
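
A sketch of the high-precision route with Python's decimal module (50 digits is an arbitrary choice here):

from decimal import Decimal, getcontext

getcontext().prec = 50  # work with 50 significant digits

def harmonic_decimal(n):
    return sum(Decimal(1) / Decimal(k) for k in range(n, 0, -1))

print(harmonic_decimal(100))  # 5.18737751764...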


Performance and Complexity

A straightforward loop is O(n) time and O(1) extra space. That's perfect for small or moderate n. For very large n, you either switch to the logarithmic approximation or split the range across threads and combine the partial sums with pairwise summation to keep accuracy. Just remember that floating-point addition is not strictly associative, so order matters for the last few digits.
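
A recursive pairwise-summation sketch; in a real program each half could be handed to a separate thread and the two results combined at the end:

def harmonic_pairwise(lo, hi):
    """Sum 1/lo + ... + 1/hi by splitting the range in half."""
    if hi - lo < 16:  # small ranges: sum directly
        return sum(1.0 / k for k in range(lo, hi + 1))
    mid = (lo + hi) // 2
    return harmonic_pairwise(lo, mid) + harmonic_pairwise(mid + 1, hi)

print(harmonic_pairwise(1, 1000000))  # ~14.3927267229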


Patterns to Notice (Learning Insights)

The sequence H_n is monotonically increasing and concave: each new term is smaller than the one before, and the increments H_{n+1} - H_n = 1/(n+1) shrink steadily. That single fact, "the increments shrink like 1/n," is the main reason you see logarithms everywhere harmonic numbers appear. You'll start recognizing "there's a hidden ln n here" whenever a process gives you a sum of reciprocals.


Alternating Cousin and Nearby Friends

If you flip the signs to 1 - 1/2 + 1/3 - 1/4 + ..., the alternating harmonic series actually converges to ln 2. Same building blocks, utterly different behavior because signs cancel. That contrast is a great lesson: convergence isn't just about term size; the pattern of terms matters too.
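
You can watch the alternating version settle toward ln 2 with a few partial sums:

import math

s = 0.0
for k in range(1, 100001):
    s += (-1) ** (k + 1) / k
    if k in (10, 1000, 100000):
        print(f"{k:>6} terms: {s:.6f}")
print(f"  ln 2 gives: {math.log(2):.6f}")  # 0.693147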


Common Pitfalls and How to Avoid Them

A frequent beginner mistake is to write sum += 1 / i with i as an integer, which performs integer division in C, C++, and Java and produces zeros for every i > 1. Force floating-point division with 1.0 / i, or cast one operand to double. Another pitfall is printing too few decimals and concluding the math is wrong; choose an appropriate format (like %.6f or more) to see accurate results. For very big n, assuming a loop is the only way is also a trap: the asymptotic formula is your friend.
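
Python 3's / is already true division, but floor division // reproduces exactly the bug described above, which makes it a handy demonstration:

n = 5
wrong = sum(1 // i for i in range(1, n + 1))  # floor division: 1, 0, 0, 0, 0
right = sum(1 / i for i in range(1, n + 1))   # true division
print(wrong, right)  # 1 2.283333333333333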


Why It Matters (Real Projects)

Harmonic numbers quietly power a lot of analyses and designs:

  • Algorithms & data structures. The expected number of comparisons in QuickSort is ≈ 2n ln n (via harmonic numbers). The greedy set cover algorithm has an H_n approximation factor. The coupon collector problem's expected time is n·H_n (see the sketch after this list). Load balancing and randomized algorithms regularly produce H_n terms.

  • Systems & networks. Backoff-style protocols, resource sharing, and performance models often simplify to sums of reciprocals, giving you ln n behavior when users or requests scale.

  • Probability & statistics. Normalization constants for heavy-tailed, Zipf-like phenomena use harmonic numbers, and you'll see H_n in expectations where you "wait for rarer and rarer events."
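
As promised above, here is a small Monte Carlo sketch of the coupon collector claim (the setup is hypothetical: 50 coupon types, 2000 trials). The average number of uniform random draws needed to see every type should hover near n·H_n:

import random

def draws_to_collect(n):
    """Draw uniformly among n coupon types until every type is seen."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        draws += 1
    return draws

n, trials = 50, 2000
avg = sum(draws_to_collect(n) for _ in range(trials)) / trials
h_n = sum(1 / k for k in range(1, n + 1))
print(f"simulated average: {avg:.1f}   n * H_n: {n * h_n:.1f}")  # both ~225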


Interview Angles You’ll Actually See

Interviewers love the harmonic series because it bridges math intuition and coding hygiene. Variants include summing with better precision, proving the divergence informally by grouping, deriving a tight bound, or applying the approximation to estimate performance quickly. A polished answer mentions integer-vs-floating division, summation order, and at least one accurate approximation with ln n + γ.


Extensions You Can Try Next

Two natural upgrades are (1) compute H_n accurately for large n using reverse order or Kahan summation, and compare against ln n + γ; and (2) compute the generalized harmonic number H_n^(p) = \sum_{k=1}^{n} 1/k^p and observe how it does converge for p > 1 (e.g., for p = 2 it tends to π²/6). These reveal how a tiny change in the exponent dramatically changes convergence.
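
A sketch of extension (2): watch H_n^(2) approach π²/6 ≈ 1.644934 as n grows.

import math

def h_general(n, p):
    return sum(1 / k ** p for k in range(1, n + 1))

for n in (10, 1000, 100000):
    print(f"n={n:>6}  H_n^(2) = {h_general(n, 2):.6f}")
print(f"pi^2/6 = {math.pi ** 2 / 6:.6f}")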


SEO-friendly wrap-up

The harmonic series 1 + 1/2 + 1/3 + ... + 1/n is a foundational topic that blends elegant mathematics with practical programming. Understanding why it diverges, how it grows like ln n, and how to compute it accurately using floating-point best practices equips beginners to analyze algorithms, avoid precision bugs, and reason about performance at scale. With concepts like the Euler–Mascheroni constant, asymptotic expansions, integral bounds, and real-world uses from QuickSort to coupon collector, mastering harmonic numbers builds deep intuition for series, limits, and numerical computation that pays off in coding interviews, competitive programming, and everyday software engineering.