@codeflash-ai codeflash-ai bot commented Dec 17, 2025

📄 15% (0.15x) speedup for Epoch._cmp in lib/matplotlib/testing/jpl_units/Epoch.py

⏱️ Runtime : 7.68 milliseconds → 6.68 milliseconds (best of 49 runs)

📝 Explanation and details

The optimized code achieves a 15% speedup through two key performance improvements:

1. Class-level allowed dictionary (Major optimization)

  • Moved the allowed frame conversion dictionary from instance-level to class-level as a static attribute
  • Why faster: Eliminates per-object memory overhead and provides O(1) access without attribute lookup overhead during convert() calls
  • Impact: The line profiler shows convert() improved from 4.92ms to 4.67ms total time, with the return Epoch(...) line dropping from 4.39ms to 4.10ms per hit
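The structural change can be sketched as follows (a minimal illustration with assumed class and attribute names, not the actual `Epoch` source; the offset values are the ones quoted in the tests below):

```python
class EpochInstanceDict:
    """Before: the conversion table is rebuilt for every object."""
    def __init__(self, frame):
        # A new dict (plus two nested dicts) is allocated per instance.
        self.allowed = {"ET": {"UTC": +64.1839}, "UTC": {"ET": -64.1839}}
        self._frame = frame


class EpochClassDict:
    """After: one shared table, stored once on the class."""
    allowed = {"ET": {"UTC": +64.1839}, "UTC": {"ET": -64.1839}}

    def __init__(self, frame):
        self._frame = frame


# Lookup code is unchanged; only allocation cost and memory differ.
a, b = EpochClassDict("ET"), EpochClassDict("UTC")
print(a.allowed is b.allowed)  # True: both resolve to the class attribute
```

Because no instance attribute shadows `allowed`, every instance resolves the name to the single class-level dict, so construction no longer pays for building it.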

2. divmod() optimization in __init__ normalization

  • Replaced separate math.floor(self._seconds / 86400) and modulus operations with a single divmod(sec, 86400.0) call
  • Why faster: divmod() computes both quotient and remainder in one operation, avoiding duplicate division computation and an extra multiplication (deltaDays * 86400.0)
  • Impact: More efficient for the seconds-to-days normalization that handles rollover cases
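The normalization rewrite can be sketched with hypothetical helper functions (the real logic lives inline in `Epoch.__init__`; names here are illustrative):

```python
import math

DAY = 86400.0  # seconds per day


def normalize_before(jd, sec):
    # Two arithmetic passes: a division for floor(), then a
    # multiply-and-subtract to recover the remaining seconds.
    delta_days = math.floor(sec / DAY)
    return jd + delta_days, sec - delta_days * DAY


def normalize_after(jd, sec):
    # One divmod() call produces quotient and remainder together.
    delta_days, sec = divmod(sec, DAY)
    return jd + delta_days, sec


# Rollover case: 86401 s past JD 2451545 is JD 2451546 plus 1 s.
print(normalize_after(2451545, 86401))   # (2451546.0, 1.0)
# Negative seconds roll the day backwards, matching the old code.
print(normalize_after(2451546, -100))    # (2451545.0, 86300.0)
```

Both versions yield the same (day, seconds) pair; `divmod` simply avoids performing the division twice.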

Performance characteristics from test results:

  • Cross-frame comparisons show 10-20% improvements (most _cmp calls with different frames)
  • Same-frame comparisons show 5-15% improvements
  • Large-scale sequential operations benefit significantly (12-20% faster on batch operations)

The optimizations are particularly effective for:

  • Applications performing many epoch comparisons across time frames (ET/UTC conversions)
  • Batch processing of temporal data where Epoch objects are frequently instantiated
  • Any workload where _cmp() is called repeatedly, as it triggers convert() for cross-frame operations

The changes preserve all existing behavior, validation logic, and API compatibility while reducing both memory footprint and computational overhead.

Correctness verification report:

| Test | Status |
|------|--------|
| ⚙️ Existing Unit Tests | 🔘 None Found |
| 🌀 Generated Regression Tests | 5573 Passed |
| ⏪ Replay Tests | 🔘 None Found |
| 🔎 Concolic Coverage Tests | 🔘 None Found |
| 📊 Tests Coverage | 100.0% |
🌀 Generated Regression Tests and Runtime
import datetime as DT
import operator

# imports
import pytest
from matplotlib.testing.jpl_units.Epoch import Epoch

# function to test (Epoch._cmp) is assumed to be imported from matplotlib/testing/jpl_units/Epoch.py


# Helper function to create Epochs in various ways for test clarity
def make_epoch(frame, sec=None, jd=None, daynum=None, dt=None):
    return Epoch(frame, sec=sec, jd=jd, daynum=daynum, dt=dt)


# =========================
# BASIC TEST CASES
# =========================


def test_cmp_equal_same_frame_same_jd_seconds():
    # Identical epochs, same frame, same JD, same seconds
    e1 = make_epoch("ET", sec=100, jd=2451545)
    e2 = make_epoch("ET", sec=100, jd=2451545)
    codeflash_output = e1._cmp(operator.eq, e2)  # 1.15μs -> 1.05μs (8.82% faster)
    codeflash_output = not e1._cmp(operator.ne, e2)  # 524ns -> 459ns (14.2% faster)
    codeflash_output = not e1._cmp(operator.lt, e2)  # 424ns -> 367ns (15.5% faster)
    codeflash_output = e1._cmp(operator.le, e2)  # 398ns -> 378ns (5.29% faster)
    codeflash_output = not e1._cmp(operator.gt, e2)  # 371ns -> 321ns (15.6% faster)
    codeflash_output = e1._cmp(operator.ge, e2)  # 390ns -> 341ns (14.4% faster)


def test_cmp_not_equal_same_frame_diff_seconds():
    # Same frame, same JD, different seconds
    e1 = make_epoch("ET", sec=100, jd=2451545)
    e2 = make_epoch("ET", sec=200, jd=2451545)
    codeflash_output = not e1._cmp(operator.eq, e2)  # 1.05μs -> 980ns (6.94% faster)
    codeflash_output = e1._cmp(operator.ne, e2)  # 461ns -> 439ns (5.01% faster)
    codeflash_output = e1._cmp(operator.lt, e2)  # 414ns -> 349ns (18.6% faster)
    codeflash_output = e1._cmp(operator.le, e2)  # 394ns -> 375ns (5.07% faster)
    codeflash_output = not e1._cmp(operator.gt, e2)  # 342ns -> 318ns (7.55% faster)
    codeflash_output = not e1._cmp(operator.ge, e2)  # 383ns -> 344ns (11.3% faster)


def test_cmp_not_equal_same_frame_diff_jd():
    # Same frame, different JD, same seconds
    e1 = make_epoch("ET", sec=100, jd=2451545)
    e2 = make_epoch("ET", sec=100, jd=2451546)
    codeflash_output = not e1._cmp(operator.eq, e2)  # 997ns -> 1.08μs (7.69% slower)
    codeflash_output = e1._cmp(operator.ne, e2)  # 465ns -> 427ns (8.90% faster)
    codeflash_output = e1._cmp(operator.lt, e2)  # 407ns -> 363ns (12.1% faster)
    codeflash_output = e1._cmp(operator.le, e2)  # 391ns -> 373ns (4.83% faster)
    codeflash_output = not e1._cmp(operator.gt, e2)  # 342ns -> 338ns (1.18% faster)
    codeflash_output = not e1._cmp(operator.ge, e2)  # 377ns -> 336ns (12.2% faster)


def test_cmp_equal_different_input_types():
    # Same epoch, constructed via different input types (daynum, dt)
    dt = DT.datetime(2000, 1, 1, 12, 0, 0)
    e1 = make_epoch("ET", sec=0, jd=2451545.0)
    e2 = make_epoch("ET", daynum=730120.0)  # JD 2451545.5
    e3 = make_epoch("ET", dt=dt)
    # e2 and e3 are at 12:00, e1 is at midnight, so e1 < e2 == e3
    codeflash_output = e2._cmp(operator.eq, e3)  # 1.17μs -> 1.08μs (8.62% faster)
    codeflash_output = e1._cmp(operator.lt, e2)  # 721ns -> 605ns (19.2% faster)
    codeflash_output = e1._cmp(operator.lt, e3)  # 488ns -> 392ns (24.5% faster)


# =========================
# EDGE TEST CASES
# =========================


def test_cmp_cross_frame_equivalence():
    # Compare two epochs at the same instant but different frames
    # ET is ahead of UTC by 64.1839s
    jd = 2451545
    sec_utc = 100
    sec_et = sec_utc + 64.1839
    e_utc = make_epoch("UTC", sec=sec_utc, jd=jd)
    e_et = make_epoch("ET", sec=sec_et, jd=jd)
    # They represent the same instant
    codeflash_output = e_utc._cmp(operator.eq, e_et)  # 3.56μs -> 3.34μs (6.67% faster)
    codeflash_output = not e_utc._cmp(
        operator.ne, e_et
    )  # 1.97μs -> 1.64μs (20.2% faster)
    codeflash_output = not e_utc._cmp(
        operator.lt, e_et
    )  # 1.56μs -> 1.34μs (15.8% faster)
    codeflash_output = e_utc._cmp(operator.le, e_et)  # 1.50μs -> 1.30μs (15.9% faster)
    codeflash_output = not e_utc._cmp(
        operator.gt, e_et
    )  # 1.43μs -> 1.23μs (15.9% faster)
    codeflash_output = e_utc._cmp(operator.ge, e_et)  # 1.45μs -> 1.25μs (15.9% faster)


def test_cmp_cross_frame_ordering():
    # ET > UTC at same JD, same seconds
    jd = 2451545
    e_utc = make_epoch("UTC", sec=100, jd=jd)
    e_et = make_epoch("ET", sec=100, jd=jd)
    # e_et is ahead by 64.1839s
    codeflash_output = e_utc._cmp(operator.lt, e_et)  # 3.50μs -> 3.21μs (9.00% faster)
    codeflash_output = e_et._cmp(operator.gt, e_utc)  # 1.95μs -> 1.74μs (12.3% faster)


def test_cmp_jd_rollover():
    # Test seconds > 86400 causing JD rollover normalization
    # 86400 seconds is one day
    e1 = make_epoch("ET", sec=86400, jd=2451545)  # Should be JD=2451546, sec=0
    e2 = make_epoch("ET", sec=0, jd=2451546)
    codeflash_output = e1._cmp(operator.eq, e2)  # 1.06μs -> 994ns (6.84% faster)
    codeflash_output = not e1._cmp(operator.ne, e2)  # 458ns -> 445ns (2.92% faster)
    codeflash_output = not e1._cmp(operator.lt, e2)  # 415ns -> 322ns (28.9% faster)
    codeflash_output = e1._cmp(operator.le, e2)  # 412ns -> 345ns (19.4% faster)


def test_cmp_negative_seconds():
    # Negative seconds should roll back JD
    e1 = make_epoch("ET", sec=-100, jd=2451546)  # Should be JD=2451545, sec=86300
    e2 = make_epoch("ET", sec=86300, jd=2451545)
    codeflash_output = e1._cmp(operator.eq, e2)  # 1.04μs -> 993ns (4.53% faster)


def test_cmp_max_seconds():
    # Large seconds value, but within a day
    e1 = make_epoch("ET", sec=86399.999, jd=2451545)
    e2 = make_epoch("ET", sec=86399.999, jd=2451545)
    codeflash_output = e1._cmp(operator.eq, e2)  # 1.05μs -> 1.03μs (2.43% faster)
    codeflash_output = not e1._cmp(operator.lt, e2)  # 484ns -> 451ns (7.32% faster)


def test_cmp_min_seconds():
    # Smallest possible seconds (0)
    e1 = make_epoch("ET", sec=0, jd=2451545)
    e2 = make_epoch("ET", sec=0, jd=2451545)
    codeflash_output = e1._cmp(operator.eq, e2)  # 1.02μs -> 942ns (7.96% faster)


def test_cmp_dt_vs_sec_jd():
    # Compare epoch constructed from datetime vs sec/jd
    dt = DT.datetime(2000, 1, 1, 12, 0, 0)
    e1 = make_epoch("ET", dt=dt)
    # 1-Jan-2000 12:00:00 = JD 2451545.5
    e2 = make_epoch("ET", sec=43200, jd=2451545)  # 0.5 days = 43200 seconds
    codeflash_output = e1._cmp(operator.eq, e2)  # 1.08μs -> 1.12μs (3.40% slower)


def test_cmp_invalid_frame_raises():
    # Invalid frame should raise ValueError
    with pytest.raises(ValueError):
        make_epoch("BAD", sec=0, jd=2451545)


def test_cmp_invalid_input_raises():
    # sec without jd
    with pytest.raises(ValueError):
        make_epoch("ET", sec=0)
    # jd without sec
    with pytest.raises(ValueError):
        make_epoch("ET", jd=2451545)
    # daynum and sec
    with pytest.raises(ValueError):
        make_epoch("ET", daynum=1, sec=0)
    # dt not datetime
    with pytest.raises(ValueError):
        make_epoch("ET", dt="not a datetime")


# =========================
# LARGE SCALE TEST CASES
# =========================


@pytest.mark.parametrize("frame", ["ET", "UTC"])
def test_cmp_large_scale_sequential(frame):
    # Create 1000 sequential epochs, check ordering
    base_jd = 2451545
    epochs = [make_epoch(frame, sec=i, jd=base_jd) for i in range(1000)]
    for i in range(999):
        codeflash_output = epochs[i]._cmp(
            operator.lt, epochs[i + 1]
        )  # 623μs -> 554μs (12.4% faster)
        codeflash_output = epochs[i + 1]._cmp(operator.gt, epochs[i])
        codeflash_output = not epochs[i]._cmp(
            operator.eq, epochs[i + 1]
        )  # 605μs -> 545μs (11.1% faster)
        codeflash_output = epochs[i]._cmp(operator.le, epochs[i + 1])
        codeflash_output = epochs[i + 1]._cmp(
            operator.ge, epochs[i]
        )  # 604μs -> 544μs (11.0% faster)


def test_cmp_large_scale_cross_frame():
    # Compare 500 UTC and 500 ET epochs at the same instant
    base_jd = 2451545
    for i in range(500):
        sec_utc = i
        sec_et = i + 64.1839
        e_utc = make_epoch("UTC", sec=sec_utc, jd=base_jd)
        e_et = make_epoch("ET", sec=sec_et, jd=base_jd)
        codeflash_output = e_utc._cmp(
            operator.eq, e_et
        )  # 663μs -> 575μs (15.4% faster)
        codeflash_output = not e_utc._cmp(operator.ne, e_et)


def test_cmp_large_scale_jd_rollover():
    # Test 100 epochs with seconds rolling over to next JD
    base_jd = 2451545
    for i in range(100):
        sec = 86400 + i  # Should roll over to JD+1, sec=i
        e1 = make_epoch("ET", sec=sec, jd=base_jd)
        e2 = make_epoch("ET", sec=i, jd=base_jd + 1)
        codeflash_output = e1._cmp(operator.eq, e2)  # 30.7μs -> 28.0μs (9.65% faster)


def test_cmp_large_scale_randomized():
    # Test random ordering of 100 epochs
    import random

    base_jd = 2451545
    indices = list(range(100))
    random.shuffle(indices)
    epochs = [make_epoch("ET", sec=i, jd=base_jd) for i in indices]
    sorted_epochs = sorted(epochs, key=lambda e: (e._jd, e._seconds))
    for i in range(99):
        codeflash_output = sorted_epochs[i]._cmp(
            operator.lt, sorted_epochs[i + 1]
        )  # 30.7μs -> 26.6μs (15.5% faster)


def test_cmp_large_scale_dt():
    # Compare 100 epochs constructed from datetime
    dt_base = DT.datetime(2000, 1, 1, 0, 0, 0)
    epochs = [
        make_epoch("ET", dt=dt_base + DT.timedelta(seconds=i)) for i in range(100)
    ]
    for i in range(99):
        codeflash_output = epochs[i]._cmp(
            operator.lt, epochs[i + 1]
        )  # 30.9μs -> 27.5μs (12.3% faster)
        codeflash_output = epochs[i + 1]._cmp(operator.gt, epochs[i])


# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.
import datetime as DT
import operator

# imports
import pytest
from matplotlib.testing.jpl_units.Epoch import Epoch

# function to test (Epoch._cmp) is assumed to be already imported as per instructions


# Helper: create Epochs easily
def make_epoch(frame, sec=None, jd=None, daynum=None, dt=None):
    return Epoch(frame, sec=sec, jd=jd, daynum=daynum, dt=dt)


# =====================
# 1. BASIC TEST CASES
# =====================


def test_cmp_equal_same_frame_same_jd_sec():
    # Two epochs with same frame, jd, and seconds should be equal
    e1 = make_epoch("ET", sec=1000, jd=2451545)
    e2 = make_epoch("ET", sec=1000, jd=2451545)
    codeflash_output = e1._cmp(operator.eq, e2)  # 1.20μs -> 1.04μs (15.2% faster)
    codeflash_output = not e1._cmp(operator.ne, e2)  # 528ns -> 462ns (14.3% faster)
    codeflash_output = not e1._cmp(operator.lt, e2)  # 437ns -> 383ns (14.1% faster)
    codeflash_output = e1._cmp(operator.le, e2)  # 423ns -> 381ns (11.0% faster)
    codeflash_output = not e1._cmp(operator.gt, e2)  # 331ns -> 336ns (1.49% slower)
    codeflash_output = e1._cmp(operator.ge, e2)  # 380ns -> 344ns (10.5% faster)


def test_cmp_not_equal_same_frame():
    # Two epochs with same frame and jd, but different seconds
    e1 = make_epoch("ET", sec=1000, jd=2451545)
    e2 = make_epoch("ET", sec=2000, jd=2451545)
    codeflash_output = not e1._cmp(operator.eq, e2)  # 954ns -> 963ns (0.935% slower)
    codeflash_output = e1._cmp(operator.ne, e2)  # 486ns -> 437ns (11.2% faster)
    codeflash_output = e1._cmp(operator.lt, e2)  # 431ns -> 369ns (16.8% faster)
    codeflash_output = e1._cmp(operator.le, e2)  # 375ns -> 365ns (2.74% faster)
    codeflash_output = not e1._cmp(operator.gt, e2)  # 361ns -> 331ns (9.06% faster)
    codeflash_output = not e1._cmp(operator.ge, e2)  # 378ns -> 354ns (6.78% faster)


def test_cmp_not_equal_different_jd():
    # Two epochs with same frame, but different jd
    e1 = make_epoch("ET", sec=1000, jd=2451545)
    e2 = make_epoch("ET", sec=1000, jd=2451546)
    codeflash_output = not e1._cmp(operator.eq, e2)  # 1.04μs -> 958ns (8.66% faster)
    codeflash_output = e1._cmp(operator.ne, e2)  # 478ns -> 414ns (15.5% faster)
    codeflash_output = e1._cmp(operator.lt, e2)  # 392ns -> 346ns (13.3% faster)
    codeflash_output = e1._cmp(operator.le, e2)  # 404ns -> 381ns (6.04% faster)
    codeflash_output = not e1._cmp(operator.gt, e2)  # 347ns -> 316ns (9.81% faster)
    codeflash_output = not e1._cmp(operator.ge, e2)  # 374ns -> 337ns (11.0% faster)


def test_cmp_different_frames_simple():
    # Compare two epochs with different frames, same jd/sec, should convert
    e1 = make_epoch("ET", sec=1000, jd=2451545)
    # e2 is 1000 seconds in UTC, which is ET - 64.1839
    e2 = make_epoch("UTC", sec=1000, jd=2451545)
    # e1 in UTC is 1000 - 64.1839 = 935.8161, so e1 > e2
    codeflash_output = not e1._cmp(operator.eq, e2)  # 3.67μs -> 3.27μs (12.0% faster)
    codeflash_output = e1._cmp(operator.gt, e2)  # 1.77μs -> 1.64μs (7.74% faster)
    codeflash_output = not e1._cmp(operator.lt, e2)  # 1.55μs -> 1.38μs (12.5% faster)


def test_cmp_different_frames_equal_after_conversion():
    # e1 in ET, e2 in UTC, but e2 is offset so they should be equal
    e1 = make_epoch("ET", sec=1000, jd=2451545)
    # e2 in UTC is ET - 64.1839, so to be equal, sec must be 1000 - 64.1839
    e2 = make_epoch("UTC", sec=1000 - 64.1839, jd=2451545)
    codeflash_output = e1._cmp(operator.eq, e2)  # 3.54μs -> 3.20μs (10.6% faster)
    codeflash_output = not e1._cmp(operator.ne, e2)  # 1.75μs -> 1.70μs (2.46% faster)


def test_cmp_daynum_and_sec_jd_equivalence():
    # Epochs constructed from daynum and from sec/jd that are equivalent
    # 1-Jan-2000 12:00:00 is JD 2451545.0, which is daynum 730119.5
    e1 = make_epoch("ET", daynum=730119.5)
    e2 = make_epoch("ET", sec=0, jd=2451545)
    codeflash_output = e1._cmp(operator.eq, e2)  # 1.22μs -> 1.10μs (10.5% faster)


def test_cmp_dt_and_sec_jd_equivalence():
    # Epochs constructed from datetime and sec/jd that are equivalent
    dt = DT.datetime(2000, 1, 1, 12, 0, 0)
    e1 = make_epoch("ET", dt=dt)
    e2 = make_epoch("ET", sec=0, jd=2451545)
    codeflash_output = e1._cmp(operator.eq, e2)  # 1.22μs -> 1.15μs (5.98% faster)


# =====================
# 2. EDGE TEST CASES
# =====================


def test_cmp_near_jd_rollover():
    # Test when seconds nearly roll over to next JD
    e1 = make_epoch("ET", sec=86399.999, jd=2451545)
    e2 = make_epoch("ET", sec=0, jd=2451546)
    # e1 is just before e2
    codeflash_output = e1._cmp(operator.lt, e2)  # 1.05μs -> 998ns (5.11% faster)
    codeflash_output = not e1._cmp(operator.eq, e2)  # 487ns -> 450ns (8.22% faster)
    codeflash_output = e2._cmp(operator.gt, e1)  # 361ns -> 351ns (2.85% faster)


def test_cmp_negative_seconds():
    # Test with negative seconds (should be normalized in constructor)
    e1 = make_epoch("ET", sec=-1, jd=2451546)
    e2 = make_epoch("ET", sec=86399, jd=2451545)
    # Both represent the same instant (since -1 sec at 2451546 == 86399 sec at 2451545)
    codeflash_output = e1._cmp(operator.eq, e2)  # 1.11μs -> 984ns (13.1% faster)


def test_cmp_large_seconds():
    # Test with seconds > 86400 (should be normalized)
    e1 = make_epoch("ET", sec=86401, jd=2451545)
    # 86401 sec = 1 day + 1 sec, so should be jd=2451546, sec=1
    e2 = make_epoch("ET", sec=1, jd=2451546)
    codeflash_output = e1._cmp(operator.eq, e2)  # 1.08μs -> 983ns (9.46% faster)


def test_cmp_minimal_difference():
    # Test with minimal difference in seconds
    e1 = make_epoch("ET", sec=1000, jd=2451545)
    e2 = make_epoch("ET", sec=1000.000001, jd=2451545)
    codeflash_output = e1._cmp(operator.lt, e2)  # 1.10μs -> 957ns (15.4% faster)
    codeflash_output = not e1._cmp(operator.eq, e2)  # 505ns -> 462ns (9.31% faster)


def test_cmp_different_frames_jd_difference():
    # e1 and e2 are in different frames and different JDs
    e1 = make_epoch("ET", sec=1000, jd=2451545)
    e2 = make_epoch("UTC", sec=1000, jd=2451546)
    # e1 in UTC is 935.8161, but JD is still less than e2
    codeflash_output = e1._cmp(operator.lt, e2)  # 3.73μs -> 3.26μs (14.3% faster)
    codeflash_output = not e1._cmp(operator.eq, e2)  # 1.93μs -> 1.72μs (11.9% faster)


def test_cmp_different_frames_edge_offset():
    # e1 in ET at sec=64.1839, e2 in UTC at sec=0, should be equal after conversion
    e1 = make_epoch("ET", sec=64.1839, jd=2451545)
    e2 = make_epoch("UTC", sec=0, jd=2451545)
    codeflash_output = e1._cmp(operator.eq, e2)  # 3.65μs -> 3.31μs (10.2% faster)


def test_cmp_extreme_jd_values():
    # Test with very large and very small JD values
    e1 = make_epoch("ET", sec=0, jd=1e6)
    e2 = make_epoch("ET", sec=0, jd=1e6 + 1)
    codeflash_output = e1._cmp(operator.lt, e2)  # 1.08μs -> 998ns (8.02% faster)
    codeflash_output = not e2._cmp(operator.lt, e1)  # 440ns -> 394ns (11.7% faster)


def test_cmp_invalid_type():
    # Passing something that's not an Epoch should raise AttributeError
    e1 = make_epoch("ET", sec=0, jd=2451545)
    with pytest.raises(AttributeError):
        e1._cmp(operator.eq, "not_an_epoch")  # 1.68μs -> 1.69μs (0.593% slower)


def test_cmp_different_frames_rounding():
    # Test for rounding errors in frame conversion
    e1 = make_epoch("ET", sec=1e-10, jd=2451545)
    e2 = make_epoch("UTC", sec=1e-10 - 64.1839, jd=2451545)
    codeflash_output = e1._cmp(operator.eq, e2)  # 3.67μs -> 3.32μs (10.8% faster)


# =====================
# 3. LARGE SCALE TEST CASES
# =====================


@pytest.mark.parametrize("delta", list(range(0, 1000, 100)))  # up to 900 seconds
def test_cmp_many_epochs_linear(delta):
    # Compare many pairs of epochs with increasing seconds
    base = make_epoch("ET", sec=0, jd=2451545)
    e = make_epoch("ET", sec=delta, jd=2451545)
    if delta == 0:
        codeflash_output = base._cmp(operator.eq, e)  # 11.0μs -> 9.30μs (18.0% faster)
    else:
        codeflash_output = base._cmp(operator.lt, e)
        codeflash_output = e._cmp(operator.gt, base)


def test_cmp_large_batch_all_equal():
    # All epochs in the batch are equal
    epochs = [make_epoch("ET", sec=500, jd=2451545) for _ in range(100)]
    for i in range(len(epochs)):
        for j in range(len(epochs)):
            codeflash_output = epochs[i]._cmp(operator.eq, epochs[j])


def test_cmp_large_batch_strictly_increasing():
    # Each epoch is strictly greater than the previous
    epochs = [make_epoch("ET", sec=i, jd=2451545) for i in range(100)]
    for i in range(len(epochs) - 1):
        codeflash_output = epochs[i]._cmp(
            operator.lt, epochs[i + 1]
        )  # 32.3μs -> 26.9μs (20.1% faster)
        codeflash_output = epochs[i + 1]._cmp(operator.gt, epochs[i])


def test_cmp_large_batch_different_frames():
    # Compare large batch of ET and UTC epochs with proper offset
    for i in range(0, 1000, 100):
        e_et = make_epoch("ET", sec=i, jd=2451545)
        e_utc = make_epoch("UTC", sec=i - 64.1839, jd=2451545)
        codeflash_output = e_et._cmp(
            operator.eq, e_utc
        )  # 16.2μs -> 14.3μs (12.7% faster)


def test_cmp_large_jd_range():
    # Compare epochs with large differences in JD
    e1 = make_epoch("ET", sec=0, jd=1)
    e2 = make_epoch("ET", sec=0, jd=1000)
    codeflash_output = e1._cmp(operator.lt, e2)  # 1.09μs -> 996ns (9.34% faster)
    codeflash_output = e2._cmp(operator.gt, e1)  # 413ns -> 388ns (6.44% faster)


def test_cmp_large_scale_minimal_diff():
    # Many epochs with minimal difference in seconds
    base = make_epoch("ET", sec=0, jd=2451545)
    for i in range(1, 100):
        e = make_epoch("ET", sec=i * 1e-6, jd=2451545)
        codeflash_output = base._cmp(operator.lt, e)  # 33.1μs -> 29.2μs (13.4% faster)
        codeflash_output = e._cmp(operator.gt, base)
        codeflash_output = not base._cmp(
            operator.eq, e
        )  # 31.1μs -> 27.1μs (14.8% faster)


# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.

To edit these changes, run `git checkout codeflash/optimize-Epoch._cmp-mj9vpssi` and push.


@codeflash-ai codeflash-ai bot requested a review from mashraf-222 December 17, 2025 10:39
@codeflash-ai codeflash-ai bot added ⚡️ codeflash Optimization PR opened by Codeflash AI 🎯 Quality: High Optimization Quality according to Codeflash labels Dec 17, 2025