Copilot AI commented Nov 30, 2025

Implements a pandas DataFrame accessor mirroring pandas-ta's API, enabling direct calculation and appending of technical indicators to DataFrames.

Changes

  • src/numta/pandas_ext.py - New TAAccessor class with:

    • Case-insensitive OHLCV column auto-detection (open, Open, o, etc.)
    • append=True/False parameter to append columns or return Series/DataFrame
    • All 136+ indicators as methods (overlap, momentum, volume, volatility, cycle, statistics, math, price transforms, 60+ candlestick patterns)
    • TA-Lib/pandas-ta column naming conventions (SMA_20, MACD_12_26_9, BBU_20_2.0)
  • src/numta/__init__.py - Auto-registers accessor on import

  • pyproject.toml - Added pandas optional dependency

  • tests/test_pandas_ext.py - 38 tests covering all indicator categories

  • README.md - Pandas Integration documentation section

Usage

```python
import pandas as pd
import numta

df = pd.read_csv("prices.csv")

# Return Series
sma = df.ta.sma(timeperiod=20)

# Append to DataFrame
df.ta.sma(timeperiod=20, append=True)   # Adds 'SMA_20'
df.ta.macd(append=True)                 # Adds 'MACD_12_26_9', 'MACDSignal_12_26_9', 'MACDHist_12_26_9'
df.ta.cdldoji(append=True)              # Adds 'CDLDOJI'

# Custom column
df.ta.sma(column='adj_close', timeperiod=20, append=True)
```

Install with: `pip install "numta[pandas]"`
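
The column auto-detection is case-insensitive, so capitalised OHLCV headers should resolve without renaming. A small illustration (a sketch: the generated data is arbitrary, and the `ATR_14` column name is inferred from the single-output naming rule rather than stated in this PR):

```python
import numpy as np
import pandas as pd
import numta

rng = np.random.default_rng(7)
close = rng.random(100).cumsum() + 100.0
df = pd.DataFrame({"Open": close - 0.5, "High": close + 1.0, "Low": close - 1.0, "Close": close})

# Capitalised headers are resolved by the case-insensitive auto-detection
df.ta.rsi(timeperiod=14, append=True)   # adds 'RSI_14'
df.ta.atr(timeperiod=14, append=True)   # adds 'ATR_14' (uses the High/Low/Close columns)
```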

Original prompt

*This pull request was created as a result of the following prompt from Copilot chat.*
> ## Overview
> Implement a pandas DataFrame extension accessor (`.ta`) to mirror the functionality found in pandas-ta. This allows seamless calculation and appending of technical indicators directly to a Pandas DataFrame, ensuring a modern, frictionless user experience for algorithmic trading workflows.
> 
> ---
> 
> ## Motivation
> - Modernizes user experience: no need for manual array transformations.
> - Direct integration into pandas for data scientists and practitioners.
> - Reduces friction for users migrating from pandas-ta (which has been abandoned since 2019).
> 
> ## Current Codebase Context
> 
> ### Project Structure
> ```
> numta/
> ├── src/
> │   └── numta/
> │       ├── __init__.py           # Main package exports (all indicators listed here)
> │       ├── backend.py            # Backend selection logic
> │       ├── benchmark.py          # Performance measurement tools
> │       ├── optimized.py          # Optimized implementations
> │       ├── api/                   # Public API layer
> │       │   ├── overlap.py        # Overlap studies (SMA, EMA, BBANDS, DEMA, TEMA, TRIMA, WMA, KAMA, MAMA, T3, SAR, SAREXT)
> │       │   ├── momentum_indicators.py  # ADX, ADXR, APO, AROON, AROONOSC, ATR, BOP, CCI, CMO, DX, MACD, MACDEXT, MACDFIX, MFI, MINUS_DI, MINUS_DM, MOM, PLUS_DI, PLUS_DM, PPO, ROC, ROCP, ROCR, ROCR100, RSI, STOCH, STOCHF, STOCHRSI, TRIX, ULTOSC, WILLR
> │       │   ├── volume_indicators.py    # AD, OBV, ADOSC
> │       │   ├── volatility_indicators.py # NATR, TRANGE
> │       │   ├── cycle_indicators.py     # HT_DCPERIOD, HT_DCPHASE, HT_PHASOR, HT_SINE, HT_TRENDLINE, HT_TRENDMODE
> │       │   ├── statistic_functions.py  # BETA, CORREL, LINEARREG, LINEARREG_ANGLE, LINEARREG_INTERCEPT, LINEARREG_SLOPE
> │       │   ├── statistics.py           # STDDEV, TSF, VAR
> │       │   ├── math_operators.py       # MAX, MAXINDEX, MIN, MININDEX, MINMAX, MINMAXINDEX, SUM
> │       │   ├── price_transform.py      # MEDPRICE, MIDPOINT, MIDPRICE, TYPPRICE, WCLPRICE
> │       │   └── pattern_recognition.py  # 60+ candlestick patterns (CDL2CROWS, CDL3BLACKCROWS, CDLDOJI, CDLENGULFING, CDLHAMMER, CDLMARUBOZU, etc.)
> │       └── cpu/                   # CPU/Numba implementations
> ├── tests/                         # Test suite
> ├── pyproject.toml
> └── README.md
> ```
> 
> ### Current Indicator Signatures (from src/numta/api/overlap.py)
> ```python
> def SMA(close: Union[np.ndarray, list], timeperiod: int = 30) -> np.ndarray:
> def EMA(close: Union[np.ndarray, list], timeperiod: int = 30) -> np.ndarray:
> def BBANDS(close: Union[np.ndarray, list], timeperiod: int = 5, nbdevup: float = 2.0, nbdevdn: float = 2.0, matype: int = 0) -> tuple:
> def DEMA(close: Union[np.ndarray, list], timeperiod: int = 30) -> np.ndarray:
> def KAMA(close: Union[np.ndarray, list], timeperiod: int = 30) -> np.ndarray:
> def MA(close: Union[np.ndarray, list], timeperiod: int = 30, matype: int = 0) -> np.ndarray:
> def MAMA(close: Union[np.ndarray, list], fastlimit: float = 0.5, slowlimit: float = 0.05) -> tuple:
> def SAR(high: Union[np.ndarray, list], low: Union[np.ndarray, list], acceleration: float = 0.02, maximum: float = 0.2) -> np.ndarray:
> def SAREXT(high: Union[np.ndarray, list], low: Union[np.ndarray, list], ...) -> np.ndarray:
> def TEMA(data: Union[np.ndarray, list], timeperiod: int = 30) -> np.ndarray:
> def T3(data: Union[np.ndarray, list], timeperiod: int = 5, vfactor: float = 0.7) -> np.ndarray:
> def TRIMA(data: Union[np.ndarray, list], timeperiod: int = 30) -> np.ndarray:
> def WMA(data: Union[np.ndarray, list], timeperiod: int = 30) -> np.ndarray:
> ```
> 
> ### All Available Indicators (from src/numta/__init__.py)
> ```python
> # Overlap Studies
> SMA, EMA, DEMA, BBANDS, KAMA, MA, MAMA, SAR, SAREXT, T3, TEMA, TRIMA, WMA
> 
> # Momentum Indicators
> ADX, ADXR, APO, AROON, AROONOSC, ATR, BOP, CCI, CMO, DX, MACD, MACDEXT, MACDFIX, MFI, MINUS_DI, MINUS_DM, MOM, PLUS_DI, PLUS_DM, PPO, ROC, ROCP, ROCR, ROCR100, RSI, STOCH, STOCHF, STOCHRSI, TRIX, ULTOSC, WILLR
> 
> # Volume Indicators
> AD, OBV, ADOSC
> 
> # Volatility Indicators  
> NATR, TRANGE
> 
> # Cycle Indicators
> HT_DCPERIOD, HT_DCPHASE, HT_PHASOR, HT_SINE, HT_TRENDLINE, HT_TRENDMODE
> 
> # Statistic Functions
> BETA, CORREL, LINEARREG, LINEARREG_ANGLE, LINEARREG_INTERCEPT, LINEARREG_SLOPE, STDDEV, TSF, VAR
> 
> # Math Operators
> MAX, MAXINDEX, MIN, MININDEX, MINMAX, MINMAXINDEX, SUM
> 
> # Price Transform
> MEDPRICE, MIDPOINT, MIDPRICE, TYPPRICE, WCLPRICE
> 
> # Pattern Recognition (60+ patterns)
> CDL2CROWS, CDL3BLACKCROWS, CDL3INSIDE, CDL3OUTSIDE, CDL3STARSINSOUTH, CDL3WHITESOLDIERS,
> CDLABANDONEDBABY, CDLADVANCEBLOCK, CDLBELTHOLD, CDLBREAKAWAY, CDLCLOSINGMARUBOZU, 
> CDLCONCEALBABYSWALL, CDLCOUNTERATTACK, CDLDARKCLOUDCOVER, CDLDOJI, CDLDOJISTAR,
> CDLDRAGONFLYDOJI, CDLENGULFING, CDLEVENINGDOJISTAR, CDLEVENINGSTAR, CDLGAPSIDESIDEWHITE, 
> CDLGRAVESTONEDOJI, CDLHAMMER, CDLHANGINGMAN, CDLHARAMI, CDLHARAMICROSS, CDLHIGHWAVE,
> CDLHIKKAKE, CDLHIKKAKEMOD, CDLHOMINGPIGEON, CDLIDENTICAL3CROWS, CDLINNECK,
> CDLINVERTEDHAMMER, CDLKICKING, CDLKICKINGBYLENGTH, CDLLADDERBOTTOM, CDLLONGLEGGEDDOJI, 
> CDLLONGLINE, CDLMARUBOZU, CDLMATCHINGLOW, CDLMATHOLD, CDLMORNINGDOJISTAR, CDLMORNINGSTAR, 
> CDLONNECK, CDLPIERCING, CDLRICKSHAWMAN, CDLRISEFALL3METHODS, CDLSEPARATINGLINES, 
> CDLSHOOTINGSTAR, CDLSHORTLINE, CDLSPINNINGTOP, CDLSTALLEDPATTERN, CDLSTICKSANDWICH, 
> CDLTAKURI, CDLTASUKIGAP, CDLTHRUSTING, CDLTRISTAR, CDLUNIQUE3RIVER, CDLUPSIDEGAP2CROWS, 
> CDLXSIDEGAP3METHODS
> ```
> 
> ---
> 
> ## Specifications & Deliverables
> 
> ### 1. Create New Module: `src/numta/pandas_ext.py`
> 
> Create a pandas DataFrame extension accessor using `@pd.api.extensions.register_dataframe_accessor("ta")`.
> 
> ### 2. Extension API Design
> 
> ```python
> import pandas as pd
> import numta  # This should auto-register the .ta accessor
> 
> df = pd.read_csv("prices.csv")  # DataFrame with columns: open, high, low, close, volume
> 
> # Calculate and return Series (default behavior)
> sma_series = df.ta.sma(timeperiod=20)
> 
> # Calculate and append to DataFrame
> df.ta.sma(timeperiod=20, append=True)  # Adds column 'SMA_20'
> 
> # Multiple indicators
> df.ta.ema(timeperiod=12, append=True)   # Adds 'EMA_12'
> df.ta.macd(append=True)                  # Adds 'MACD_12_26_9', 'MACDSignal_12_26_9', 'MACDHist_12_26_9'
> df.ta.bbands(timeperiod=20, append=True) # Adds 'BBU_20_2.0', 'BBM_20', 'BBL_20_2.0'
> df.ta.rsi(timeperiod=14, append=True)    # Adds 'RSI_14'
> 
> # Candlestick patterns
> df.ta.cdldoji(append=True)               # Adds 'CDLDOJI'
> df.ta.cdlengulfing(append=True)          # Adds 'CDLENGULFING'
> ```
> 
> ### 3. Column Naming Convention
> 
> Follow TA-Lib and pandas-ta conventions:
> - Single output: `{INDICATOR}_{param1}_{param2}` e.g., `SMA_20`, `EMA_12`, `RSI_14`
> - Multiple outputs (BBANDS): `BBU_{period}_{stddev}`, `BBM_{period}`, `BBL_{period}_{stddev}`
> - Multiple outputs (MACD): `MACD_{fast}_{slow}_{signal}`, `MACDSignal_{fast}_{slow}_{signal}`, `MACDHist_{fast}_{slow}_{signal}`
> - Pattern recognition: `{PATTERN_NAME}` e.g., `CDLDOJI`, `CDLENGULFING`
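>
> As a rough illustration, these rules could live in one hypothetical helper (only indicator parameters, not `column`/`append`, would be passed in; the MACD parameter names are assumed from TA-Lib defaults):
> ```python
> def _column_names(indicator: str, **params) -> list[str]:
>     """Build output column names following the conventions above (sketch)."""
>     if indicator == "BBANDS":
>         p = params["timeperiod"]
>         return [f"BBU_{p}_{params['nbdevup']}", f"BBM_{p}", f"BBL_{p}_{params['nbdevdn']}"]
>     if indicator == "MACD":
>         f, s, sig = params["fastperiod"], params["slowperiod"], params["signalperiod"]
>         return [f"MACD_{f}_{s}_{sig}", f"MACDSignal_{f}_{s}_{sig}", f"MACDHist_{f}_{s}_{sig}"]
>     if indicator.startswith("CDL"):
>         return [indicator]
>     # Single-output default: e.g. _column_names("SMA", timeperiod=20) -> ["SMA_20"]
>     suffix = "_".join(str(v) for v in params.values())
>     return [f"{indicator}_{suffix}" if suffix else indicator]
> ```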
> 
> ### 4. Auto-detect OHLCV Columns
> 
> The accessor should automatically detect standard column names (case-insensitive):
> - `open`, `Open`, `OPEN`, `o`
> - `high`, `High`, `HIGH`, `h`
> - `low`, `Low`, `LOW`, `l`
> - `close`, `Close`, `CLOSE`, `c`
> - `volume`, `Volume`, `VOLUME`, `v`
> 
> If columns cannot be auto-detected, allow explicit specification:
> ```python
> df.ta.sma(column='adj_close', timeperiod=20, append=True)
> ```
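>
> A minimal sketch of how the case-insensitive lookup could work (the alias table mirrors the list above; the real implementation may differ):
> ```python
> _ALIASES = {
>     "open": {"open", "o"},
>     "high": {"high", "h"},
>     "low": {"low", "l"},
>     "close": {"close", "c"},
>     "volume": {"volume", "v"},
> }
>
> def _detect_ohlcv(columns) -> dict:
>     """Map canonical OHLCV names to the DataFrame's actual column labels (sketch)."""
>     found = {}
>     for col in columns:
>         key = str(col).lower()
>         for canonical, aliases in _ALIASES.items():
>             if key in aliases and canonical not in found:
>                 found[canonical] = col
>     return found
>
> # _detect_ohlcv(["Open", "High", "Low", "Close", "Volume"]) -> {"open": "Open", ...}
> ```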
> 
> ### 5. Implementation Requirements
> 
> #### TAAccessor Class Structure
> ```python
> @pd.api.extensions.register_dataframe_accessor("ta")
> class TAAccessor:
>     def __init__(self, pandas_obj: pd.DataFrame):
>         self._obj = pandas_obj
>         self._validate()
>         self._detect_ohlcv_columns()
>     
>     def _validate(self):
>         # Ensure it's a valid DataFrame
>         pass
>     
>     def _detect_ohlcv_columns(self):
>         # Auto-detect open, high, low, close, volume columns
>         pass
>     
>     def _get_column(self, name: str, column: Optional[str] = None) -> np.ndarray:
>         # Get column data as numpy array
>         pass
>     
>     def _append_or_return(self, result, column_names, append: bool):
>         # Either append to DataFrame or return Series/DataFrame
>         pass
>     
>     # Overlap Studies
>     def sma(self, timeperiod: int = 30, column: str = None, append: bool = False): ...
>     def ema(self, timeperiod: int = 30, column: str = None, append: bool = False): ...
>     def bbands(self, timeperiod: int = 5, nbdevup: float = 2.0, nbdevdn: float = 2.0, matype: int = 0, column: str = None, append: bool = False): ...
>     # ... etc for all indicators
>     
>     # Pattern Recognition
>     def cdldoji(self, append: bool = False): ...
>     def cdlengulfing(self, append: bool = False): ...
>     # ... etc for all 60+ patterns
> ```
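>
> For the append/return split, one possible shape of `_append_or_return` (a sketch that assumes multi-output indicators return tuples of numpy arrays; the return value for `append=True` is not pinned down by this spec):
> ```python
> def _append_or_return(self, result, column_names, append: bool):
>     """Wrap raw arrays as Series and either attach them to the DataFrame or return them (sketch)."""
>     if isinstance(column_names, str):
>         column_names = [column_names]
>     if not isinstance(result, tuple):
>         result = (result,)
>     series = {
>         name: pd.Series(values, index=self._obj.index, name=name)
>         for name, values in zip(column_names, result)
>     }
>     if append:
>         for name, s in series.items():
>             self._obj[name] = s
>         return self._obj
>     if len(series) == 1:
>         return next(iter(series.values()))
>     return pd.DataFrame(series)
> ```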
> 
> ### 6. Update `src/numta/__init__.py`
> 
> Add import to auto-register the accessor:
> ```python
> # At the end of __init__.py
> # Register pandas DataFrame extension
> from . import pandas_ext  # This registers the .ta accessor
> ```
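>
> Since pandas is only an optional dependency, the registration import presumably has to tolerate environments without pandas installed. A minimal sketch of such a guard (an assumption, not necessarily how the module handles it):
> ```python
> # At the end of src/numta/__init__.py
> try:
>     from . import pandas_ext  # noqa: F401 -- importing has the side effect of registering .ta
> except ImportError:
>     # pandas is not installed; the plain numpy API remains fully usable
>     pass
> ```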
> 
> ### 7. Add pandas as Optional Dependency
> 
> Update `pyproject.toml`:
> ```toml
> [project.optional-dependencies]
> pandas = [
>     "pandas>=1.3.0",
> ]
> # Update dev to include pandas
> dev = [
>     "pytest>=7.0.0",
>     "pytest-benchmark>=4.0.0",
>     "pandas>=1.3.0",
> ]
> ```
> 
> ### 8. Tests
> 
> Create `tests/test_pandas_ext.py` with comprehensive tests:
> - Test all indicator methods work correctly
> - Test `append=True` behavior
> - Test column naming conventions
> - Test auto-detection of OHLCV columns
> - Test custom column specification
> - Test edge cases (empty DataFrame, missing columns, etc.)
> - Compare results with direct numta function calls
> 
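> One possible shape for the results-comparison test (a sketch; the fixture data and names are illustrative):
> ```python
> import numpy as np
> import pandas as pd
> import numta
>
> def test_sma_matches_direct_call():
>     close = np.random.default_rng(42).random(200)
>     df = pd.DataFrame({"close": close})
>     via_accessor = df.ta.sma(timeperiod=20).to_numpy()
>     direct = numta.SMA(close, timeperiod=20)
>     np.testing.assert_allclose(via_accessor, direct, equal_nan=True)
> ```
>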
> ### 9. Documentation
> 
> Update `README.md` with:
> - New "Pandas Integration" section
> - Usage examples
> - Column naming conventions
> - Migration guide from pandas-ta
> 
> ---
> 
> ## Acceptance Criteria
> 
> 1. **All indicators accessible via `.ta` accessor** - Every indicator in numta (136 total) should be callable via the DataFrame accessor
> 2. **Correct results** - Results must match direct numta function calls exactly
> 3. **Column naming** - Follow TA-Lib/pandas-ta conventions for automatic column naming
> 4. **append parameter** - `append=True` appends to DataFrame, `append=False` (default) returns Series/DataFrame
> 5. **Auto-detection** - Automatically detect OHLCV columns with case-insensitive matching
> 6. **Custom columns** - Allow explicit column specification via `column=` parameter
> 7. **Tests passing** - Comprehensive test coverage for all accessor methods
> 8. **Documentation** - README updated with usage examples
> 
> ---
> 
> ## References
> - [pandas-ta DataFrame accessor pattern](https://github.com/aarigs/pandas-ta)
> - [Pandas DataFrame extension guide](https://pandas.pydata.org/docs/dev/reference/api/pandas.api.extensions.register_dataframe_accessor.html)
> - [TA-Lib API docs](https://ta-lib.org/doc_api/index.html)
> - numta indicator lists in `FUNCTION_IMPLEMENTATIONS.md` and `src/numta/__init__.py`
> 

---

Copilot AI and others added 2 commits November 30, 2025 23:07
Co-authored-by: deepentropy <8287111+deepentropy@users.noreply.github.com>
…t pandas installation

Co-authored-by: deepentropy <8287111+deepentropy@users.noreply.github.com>
Copilot AI changed the title from "[WIP] Implement pandas DataFrame extension accessor for technical indicators" to "Add pandas DataFrame extension accessor (.ta) for technical indicators" on Nov 30, 2025
Copilot AI requested a review from deepentropy November 30, 2025 23:13
deepentropy marked this pull request as ready for review November 30, 2025 23:22
Copilot AI review requested due to automatic review settings November 30, 2025 23:22
@chatgpt-codex-connector

You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.
To continue using code reviews, you can upgrade your account or add credits to your account and enable them for code reviews in your settings.

deepentropy merged commit 95300f6 into main on Nov 30, 2025 (14 checks passed)
deepentropy deleted the copilot/add-ta-accessor-to-dataframe branch November 30, 2025 23:22

Copilot AI left a comment


Pull request overview

This pull request implements a pandas DataFrame extension accessor (.ta) that provides seamless integration for calculating and appending 136+ technical indicators directly to DataFrames, mirroring the pandas-ta API pattern.

Key Changes:

  • New pandas accessor enables df.ta.sma() style indicator calculations
  • Automatic case-insensitive OHLCV column detection supporting multiple naming conventions
  • Flexible append=True/False parameter to either modify DataFrames in-place or return Series/DataFrame results

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.

Summary per file:

| File | Description |
| --- | --- |
| src/numta/pandas_ext.py | Implements the TAAccessor class with 136+ indicator methods, column auto-detection, and TA-Lib-compatible naming conventions |
| src/numta/__init__.py | Auto-registers the .ta accessor by importing the pandas_ext module |
| pyproject.toml | Adds pandas as an optional dependency under the [pandas] extra and creates an [all] extra group |
| tests/test_pandas_ext.py | Comprehensive test suite covering all indicator categories, edge cases, and API behavior |
| README.md | Adds a "Pandas Integration" documentation section with usage examples and migration guide |



**README.md** (excerpt the comment below refers to):

### Migration from pandas-ta

If you're migrating from pandas-ta (abandoned since 2019), numta provides a compatible API:

Copilot AI Nov 30, 2025


The claim that pandas-ta was "abandoned since 2019" appears to be inaccurate. The pandas-ta library (twopirllc/pandas-ta on GitHub) has had active development beyond 2019, with the last release being in 2022. Consider revising this to simply state the migration benefits without making potentially inaccurate claims about the other library's maintenance status.

Suggested change:

```diff
-If you're migrating from pandas-ta (abandoned since 2019), numta provides a compatible API:
+If you're migrating from pandas-ta, numta provides a compatible API:
```

"numba>=0.56.0", # JIT compilation for faster execution
]
all = [
"pandas>=1.3.0",

Copilot AI Nov 30, 2025


[nitpick] The "all" optional dependency group appears incomplete. It should include all optional dependencies (comparison, numba) or be more clearly named to indicate it only includes specific extras. Consider either:

  1. Renaming to "dev-full" or similar to clarify scope
  2. Including all optional dependencies: pandas, numba, comparison, pytest packages
Suggested change:

```diff
-    "pandas>=1.3.0",
+    "pandas>=1.3.0",
+    "numba>=0.56.0",
+    "TA-Lib>=0.4.0",
```

Comment on lines +132 to +134 of src/numta/pandas_ext.py:

```python
if not isinstance(result, tuple):
    result = (result,)
```

Copilot AI Nov 30, 2025


[nitpick] Potential logic issue in _append_or_return: When column_names is a string, line 130 converts result to a tuple if it's not already one. However, lines 132-133 do the same check again. This creates redundant logic. While it shouldn't cause incorrect behavior (since the second check will see result is already a tuple), the code could be simplified by removing lines 132-133 since the tuple conversion is already handled in line 130 when needed.

Suggested change:

```diff
-if not isinstance(result, tuple):
-    result = (result,)
```
