Comparing Share Volume and Rupee Turnover in Indian Stocks


Table Of Contents
  1. The Theory of Market Activity: Volume vs. Turnover
  2. Structural Comparison: Share Count vs. Monetary Value
  3. Python Data Workflow: Fetch → Store → Measure
  4. Exchange-Wise Volume & Turnover Splits (NSE vs. BSE)
  5. Normalization & Relative Metrics
  6. Temporal Distribution: Intraday Skewness
  7. Advanced Python Implementations
  8. Final Compiled Reference Section

The Indian equity market is characterized by a high degree of heterogeneity, where stocks range from low-priced retail favorites to high-priced institutional heavyweights. To accurately gauge market activity, analysts must distinguish between two primary metrics: Traded Volume and Rupee Turnover. While volume tracks the raw number of units exchanged, turnover measures the total monetary value flowing through the security. Relying solely on volume can lead to skewed conclusions, particularly when comparing “penny stocks” with blue-chip giants like MRF or Page Industries.

Author’s Note: This article clarifies the structural differences between share-based and value-based activity metrics. It focuses on the mathematical and programmatic definition of these statistics as foundational data points. It does not recommend specific trading preferences or usage scenarios, as those strategic applications belong to the execution and strategy pillars of market analysis.

The Theory of Market Activity: Volume vs. Turnover

Market activity is often visualized through the lens of a “narrative of numbers.” If price represents the consensus on value, then volume and turnover represent the intensity of that consensus. Looking at the “number of shares” alone is akin to counting the number of people at a fair without knowing how much money they are spending. In the Indian context, where stock prices vary from ₹1 to over ₹1,00,000, the “Activity Profile” of a stock is fundamentally shaped by its price scaling effects.

Price Scaling Effects

Consider a scenario where two stocks, Stock A (priced at ₹20) and Stock B (priced at ₹20,000), both record a volume of 1,00,000 shares. In a purely volume-based ranking, both appear identical. However, the capital required to move Stock B is 1,000 times greater than that for Stock A. A 1% price move in Stock B represents a massive shift in market capitalization and institutional commitment, whereas the same move in Stock A might be driven by minor retail speculation. This discrepancy is why Rupee Turnover is the superior metric for identifying “Smart Money” footprints.
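The asymmetry above can be checked with a few lines of arithmetic. The figures below are the illustrative prices and volume from the example, not real market data:

```python
# Illustrative figures from the example above (not real market data)
volume = 100_000            # 1,00,000 shares traded in each stock

price_a = 20                # Stock A: low-priced retail favourite
price_b = 20_000            # Stock B: high-priced heavyweight

turnover_a = volume * price_a   # ₹20 Lakh
turnover_b = volume * price_b   # ₹200 Crore

# Identical volume, but the capital deployed differs by the price ratio
print(f"Turnover A: ₹{turnover_a:,}")
print(f"Turnover B: ₹{turnover_b:,}")
print(f"Capital multiple: {turnover_b // turnover_a}x")   # → 1000x
```

A volume-ranked screener treats both stocks as equal; a turnover-ranked one correctly surfaces the 1,000× difference in capital commitment.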

Defining the Metrics

To mathematically formalize these concepts, we define Traded Volume as the sum of all units exchanged and Rupee Turnover as the value-weighted sum of those transactions.

Mathematical Specification: Traded Volume and Rupee Turnover

Traded Volume: $V = \sum_{i=1}^{n} Q_i$

Rupee Turnover: $T = \sum_{i=1}^{n} Q_i \times P_i$

Description: The Traded Volume ($V$) is the aggregate sum of quantities ($Q_i$) for all $n$ trades in a given period. Rupee Turnover ($T$) is the summation of the product of quantity ($Q_i$) and price ($P_i$) for each individual trade $i$. While $V$ measures the “size of the crowd,” $T$ measures the “capital at the fair.”

Variable Definitions:

  • $V$ (Resultant): Total shares traded during the observation window.
  • $T$ (Resultant): Total monetary value of trades in Indian Rupees (INR).
  • $Q_i$ (Variable): The quantity of shares in the $i$-th transaction.
  • $P_i$ (Variable): The price at which the $i$-th transaction was executed.
  • $\sum$ (Operator): The summation symbol indicating the addition of all individual trade components.
  • $n$ (Parameter): The total number of discrete trades executed.

Structural Comparison: Share Count vs. Monetary Value

The “Unit Bias Problem” is a common trap for retail traders. High-volume lists in India are frequently dominated by low-priced penny stocks (e.g., Vodafone Idea, YES Bank), which can trade millions of shares with relatively low capital. Conversely, turnover rankings are dominated by institutional-grade blue-chips (e.g., HDFC Bank, Reliance). This distinction is vital for understanding liquidity and market depth.

Average Traded Price (ATP)

The Average Traded Price (ATP) is a critical bridge between volume and turnover. It is not a simple average of High, Low, and Close, but a volume-weighted mean of every trade that occurred during the session. It represents the true “fair value” at which the majority of capital changed hands.

Mathematical Specification: Average Traded Price (ATP)

$\text{ATP} = \dfrac{\sum_{i=1}^{n} Q_i \times P_i}{\sum_{i=1}^{n} Q_i} = \dfrac{T}{V}$

Description: ATP is the ratio of Total Rupee Turnover to Total Traded Volume. It provides a single price point that summarizes the distribution of value across all trades. In Python, this is efficiently handled by summing the product of quantity and price columns before dividing by the total quantity.

Variable Definitions:

  • ATP (Resultant): The volume-weighted average price for the period.
  • Numerator ($T$): Total Traded Value (Rupee Turnover).
  • Denominator ($V$): Total Traded Quantity (Volume).
  • $Q_i, P_i$: Quantity and Price of the $i$-th individual trade.
Python Implementation: Calculating Turnover and ATP from Trade Logs
import pandas as pd

def calculate_activity_metrics(trade_log_df):
    """
    Calculates key activity metrics from a trade log:
    Total Volume, Total Turnover, and Average Traded Price (ATP).

    Parameters:
    -----------
    trade_log_df : pd.DataFrame
        A DataFrame containing at least two columns:
        - 'quantity': The number of shares traded in each transaction.
        - 'price': The price per share for each transaction.

    Returns:
    --------
    dict
        A dictionary containing:
        - 'Total_Volume': Sum of all traded quantities.
        - 'Total_Turnover': Sum of (Price * Quantity) for all trades.
        - 'ATP': The Volume-Weighted Average Price (Turnover / Volume).
    """
    # 1. Validation: Ensure necessary columns exist to avoid runtime errors
    if 'quantity' not in trade_log_df.columns or 'price' not in trade_log_df.columns:
        raise ValueError("DataFrame must contain 'quantity' and 'price' columns.")

    # 2. Vectorized Calculation: Compute Trade Value for each row
    #    Formula: Trade Value = Quantity * Price
    #    We use .copy() to ensure we don't modify the original dataframe slice unexpectedly
    df_calc = trade_log_df.copy()
    df_calc['trade_value'] = df_calc['quantity'] * df_calc['price']

    # 3. Aggregation: Calculate Total Volume and Total Turnover
    total_volume = df_calc['quantity'].sum()
    total_turnover = df_calc['trade_value'].sum()

    # 4. Derived Metric: Calculate Average Traded Price (ATP)
    #    ATP acts as the break-even price for the total volume traded.
    #    Logic: If volume is 0, ATP is 0 to avoid a ZeroDivisionError.
    atp = total_turnover / total_volume if total_volume > 0 else 0.0

    return {
        'Total_Volume': total_volume,
        'Total_Turnover': total_turnover,
        'ATP': atp
    }

# --- Main Execution Block ---
if __name__ == "__main__":
    # 1. Setup Dummy Data: simulating 4 trades at different price points
    trades_data = {
        'quantity': [100, 500, 200, 1000],
        'price': [2500.50, 2501.00, 2499.75, 2500.00]
    }
    trades_df = pd.DataFrame(trades_data)

    print("--- Input Data ---")
    print(trades_df)
    print("\n--- Calculating Metrics ---")

    # 2. Execute Function
    metrics = calculate_activity_metrics(trades_df)

    # 3. Display Results
    #    Formatting: Turnover to 2 decimal places with commas, ATP to 2 decimal places
    print(f"Total Volume: {metrics['Total_Volume']:,}")
    print(f"Total Turnover: ₹{metrics['Total_Turnover']:,.2f}")
    print(f"ATP (VWAP): ₹{metrics['ATP']:,.2f}")

Methodological Summary: Calculation of Activity Metrics

The following process outlines the algorithmic derivation of Rupee Turnover and Average Traded Price (ATP) using vectorised operations. This approach ensures computational efficiency suitable for high-frequency financial datasets.

Step 1: Data Ingestion and Validation

The process begins by accepting a structured dataset (DataFrame). The algorithm validates the existence of two critical vectors:

  • q (Quantity): The discrete number of units exchanged per transaction.
  • p (Price): The execution price per unit for the corresponding transaction.

Step 2: Computation of Individual Trade Value

For every transaction row i, the Trade Value (v) is calculated. This represents the total capital exchanged in a single executed order.

vi = qi × pi

Step 3: Aggregation of Core Metrics

The system computes the aggregate sums for both volume and value across the entire time series or trading session:

  • Total Volume (Vtotal): The summation of all quantities traded.
    Vtotal = ∑ qi
  • Total Turnover (Ttotal): The summation of all individual trade values.
    Ttotal = ∑ vi = ∑ (qi × pi)

Step 4: Derivation of Average Traded Price (ATP)

The ATP is conceptually equivalent to the Volume Weighted Average Price (VWAP) over the selected period. It is derived by dividing the Total Turnover by the Total Volume. A conditional check is implemented to handle cases where Total Volume is zero to prevent division errors.

ATP = Ttotal ⁄ Vtotal

This metric provides a benchmark price that reflects the “center of gravity” for trading activity, weighing larger trades more heavily than smaller ones.

Fetch-Store-Measure Workflow: Volume & Turnover

To implement these metrics at scale, a robust data pipeline is required. The complexity arises from the need to handle multi-exchange data (NSE and BSE) while maintaining temporal accuracy.

  • Fetch: Use libraries like nsepython or yfinance to pull daily or intraday OHLCV data. Note that for Indian stocks, ‘Volume’ is standard, but ‘Turnover’ is often a separate field in Bhavcopy files.
  • Store: Store data in a time-series optimized database like TimescaleDB. Use separate columns for traded_qty and turnover_value to avoid rounding errors during subsequent calculations.
  • Measure: Apply rolling window functions (e.g., 20-day averages) to identify deviations from normal activity. A turnover spike without a corresponding volume spike (in a high-priced stock) often signals institutional entry.
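The "Measure" step above can be sketched with pandas rolling windows. The function name `flag_turnover_spikes`, the column names, and the 2× threshold are illustrative assumptions, not part of any library:

```python
import pandas as pd
import numpy as np

def flag_turnover_spikes(df, window=20, threshold=2.0):
    """Flag days where turnover exceeds `threshold` times its rolling mean
    while volume stays near its own average (possible institutional entry)."""
    out = df.copy()
    out['turn_ma'] = out['turnover'].rolling(window).mean()
    out['vol_ma'] = out['volume'].rolling(window).mean()
    out['turn_spike'] = out['turnover'] > threshold * out['turn_ma']
    out['vol_spike'] = out['volume'] > threshold * out['vol_ma']
    # A turnover spike without a matching volume spike is "value-led"
    out['value_led_spike'] = out['turn_spike'] & ~out['vol_spike']
    return out

# Synthetic demo: flat activity, then a value-led spike on the last day
rng = np.random.default_rng(0)
df = pd.DataFrame({
    'volume': rng.integers(90_000, 110_000, 30),
    'turnover': rng.uniform(9e7, 1.1e8, 30),
})
df.loc[29, 'turnover'] = 5e8   # value jumps, share count does not
flagged = flag_turnover_spikes(df)
print(flagged.loc[29, ['turn_spike', 'vol_spike', 'value_led_spike']])
```

In a high-priced stock, the `value_led_spike` flag is the signature described above: large capital moving without a proportional increase in share count.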

Impact on Trading Horizons

  • Short-Term: Intraday turnover spikes act as a “monetary floor.” High turnover at a specific price level suggests that a large number of participants are comfortable transacting there, creating a zone of high liquidity and potential support/resistance.
  • Medium-Term: Using rolling turnover averages helps identify structural shifts in interest. For instance, if turnover is rising while price is flat, it may indicate “absorption” where a large seller is being met by equally large buyers.
  • Long-Term: Market-wide turnover serves as a proxy for equity participation. Rising aggregate turnover in the Indian market suggests deepening financialization and increasing domestic liquidity, reducing the impact of foreign capital outflows.

Python Data Workflow: Fetch → Store → Measure

To analyze the Indian stock market at a granular level, a systematic data pipeline is essential. The “Fetch → Store → Measure” workflow ensures that raw data from exchanges like the NSE and BSE is transformed into actionable quantitative insights. This section details the programmatic steps required to build this infrastructure using Python.

Step 1: Data Acquisition (Fetch)

In the Indian ecosystem, volume and turnover data are primarily sourced from “Bhavcopy” files provided by the exchanges. While yfinance is excellent for global OHLCV data, specific Indian metrics like “Deliverable Quantity” and “Total Turnover” are best retrieved via nsepython or jugaad-data, which interface directly with NSE India’s web portal.

Python Implementation: Fetching Historical Turnover via nsepython
from nsepython import nse_eq
import pandas as pd

def fetch_equity_activity(symbol):
    """
    Fetches daily trading activity, specifically Traded Quantity and Turnover,
    for a given NSE equity symbol using the nsepython library.

    Parameters:
    -----------
    symbol : str
        The NSE stock ticker symbol (e.g., "RELIANCE", "INFY").

    Returns:
    --------
    pd.DataFrame
        A DataFrame containing a single row with:
        - Symbol
        - Traded_Qty (Volume)
        - Turnover_In_Lakhs (Value)
        - Last_Price (LTP)
        Returns an empty DataFrame if the fetch fails.
    """
    try:
        # 1. Data Ingestion: Fetch real-time quote data from NSE
        #    nse_eq() handles the headers and session management required by NSE's website.
        payload = nse_eq(symbol)

        # 2. Validation: Check if the API returned a valid dictionary
        if not isinstance(payload, dict):
            raise ValueError(f"Invalid response received for symbol: {symbol}")

        # 3. Key Extraction: locate the specific nested dictionaries
        #    Note: NSE API keys can be volatile. We target the 'marketDeptOrderBook'
        #    and 'tradeInfo' sections where volume data typically resides.
        order_book = payload.get('marketDeptOrderBook', {})
        trade_info = order_book.get('tradeInfo', {})
        metadata = payload.get('metadata', {})

        # 4. Metric Parsing
        #    totalTradedVolume: Total shares traded so far in the day
        #    totalTradedValue: Total turnover (usually in Lakhs by default on NSE website)
        #    lastPrice: Current market price
        traded_qty = trade_info.get('totalTradedVolume', 0)
        turnover_value = trade_info.get('totalTradedValue', 0)
        last_price = metadata.get('lastPrice', 0)

        # 5. Structure Data
        activity_metrics = {
            'Symbol': symbol,
            'Traded_Qty': traded_qty,
            'Turnover_In_Lakhs': turnover_value,  # NSE 'value' is typically in Lakhs
            'Last_Price': last_price
        }

        return pd.DataFrame([activity_metrics])

    except Exception as e:
        print(f"Error fetching data for {symbol}: {e}")
        return pd.DataFrame()

# --- Main Execution Block ---
if __name__ == "__main__":
    # Define the target symbol
    target_symbol = "RELIANCE"

    print(f"--- Fetching Activity for {target_symbol} ---")

    # Execute Function
    df_result = fetch_equity_activity(target_symbol)

    # Display Results
    if not df_result.empty:
        # 'Turnover_In_Lakhs' is raw from NSE (e.g., 25000.50 means 25,000 Lakhs)
        print(df_result.to_string(index=False))

        # Optional: explicit unit label for clarity
        turnover_raw = df_result['Turnover_In_Lakhs'].iloc[0]
        print(f"\nFormatted Turnover: ₹ {turnover_raw:,.2f} Lakhs")
    else:
        print("No data available.")

Methodological Definition: Real-Time Equity Activity Retrieval

The Python procedure defined above automates the extraction of key liquidity metrics—specifically Volume and Turnover—from the National Stock Exchange (NSE) endpoints. This process ensures that the raw JSON data provided by the exchange is parsed into a structured, analytical format.

Step 1: Connection and Data Ingestion

The system utilizes the nsepython wrapper to interface with NSE’s quote endpoint. This abstracts the complexity of session management (handling cookies and headers) which is required to bypass the exchange’s anti-scraping mechanisms.

Step 2: Hierarchical Data Traversal

The data returned by the exchange is a multi-layered JSON object. The algorithm navigates specific sub-dictionaries to locate trade data:

  • Root Node: The main response object.
  • Market Depth Node (marketDeptOrderBook): Contains the real-time order book and trade summaries.
  • Trade Information Node (tradeInfo): The specific leaf node containing the aggregated trade statistics for the session.

Step 3: Metric Extraction and Mathematical Context

Three primary variables are extracted and mapped:

  • Traded Quantity (Q): The cumulative sum of all shares exchanged during the session.
    Q = ∑ qi (where qi is the quantity of the i-th trade)
  • Turnover (T): The monetary value of the volume traded. The NSE API typically reports this in Lakhs ($10^5$ INR).
    $T_{reported} = \left(\sum_i p_i \times q_i\right) / 10^5$
  • Last Traded Price (LTP): The execution price of the most recent transaction, extracted from the metadata node to ensure currency.
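Because the API reports value in Lakhs, a tiny conversion helper avoids unit mistakes downstream. The function names here are illustrative, not part of nsepython:

```python
LAKH = 100_000          # 10^5 INR
CRORE = 10_000_000      # 10^7 INR (1 Crore = 100 Lakh)

def lakhs_to_rupees(value_in_lakhs: float) -> float:
    """Convert an NSE-reported 'value in Lakhs' to absolute Rupees."""
    return value_in_lakhs * LAKH

def lakhs_to_crores(value_in_lakhs: float) -> float:
    """Convert Lakhs to Crores (the unit most dashboards display)."""
    return value_in_lakhs / 100

# Example: a reported turnover of 25,000 Lakhs
reported = 25_000.0
print(lakhs_to_rupees(reported))   # → 2500000000.0 INR
print(lakhs_to_crores(reported))   # → 250.0 Crore
```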

Step 4: Structuring for Analysis

The extracted scalars are encapsulated into a dictionary and subsequently converted into a Pandas DataFrame. This tabular format is the standard for financial time-series analysis, allowing for immediate integration with downstream quantitative models or visualization tools.

Step 2: Storage Architecture (Store)

For a project involving 5,000–7,000 stocks, a flat CSV-based storage system is inefficient. A relational database with time-series capabilities, such as PostgreSQL with the TimescaleDB extension, is recommended. The schema must handle different units (Shares vs. Rupees) and ensure data integrity during stock corporate actions like splits.

Database Schema: Market Statistics Table
/*
* ======================================================================================
* Script Name: Market Statistics Table Definition
* Description: Defines the schema for storing daily market statistics (Bhavcopy data)
* for equity instruments. Includes primary keys for data integrity and
* indices for performance optimization on analytical queries.
* Database: PostgreSQL / MySQL / Standard SQL Compatible
* ======================================================================================
*/

-- 1. Table Creation: market_stats
-- This table is designed to hold end-of-day (EOD) summary data for stocks.
CREATE TABLE market_stats (
    -- The specific date of the trading session (Format: YYYY-MM-DD)
    trade_date DATE NOT NULL,

    -- The unique ticker symbol of the asset (e.g., 'RELIANCE', 'TCS')
    symbol VARCHAR(25) NOT NULL,

    -- The exchange where the trade occurred (e.g., 'NSE', 'BSE')
    -- Kept as a separate column to allow multi-exchange analysis.
    exchange VARCHAR(10) NOT NULL,

    -- The official closing price of the asset for the day.
    -- Precision (15, 2) allows for prices up to trillions with 2 decimal places.
    close_price NUMERIC(15, 2),

    -- Total number of shares/units traded during the session.
    -- BIGINT is used to handle high-volume penny stocks or large caps.
    traded_qty BIGINT,

    -- Total value of shares traded, normalized to Lakhs (100,000s).
    -- Precision (20, 2) ensures capacity for large aggregates.
    turnover_in_lakhs NUMERIC(20, 2),

    -- The quantity of shares marked for delivery (not squared off intraday).
    delivery_qty BIGINT,

    -- The ratio of delivery quantity to total traded quantity (0.00 to 100.00).
    delivery_percentage NUMERIC(5, 2),

    -- Primary Key Constraint:
    -- Ensures that a specific symbol on a specific exchange has only one entry per date.
    PRIMARY KEY (trade_date, symbol, exchange)
);

-- 2. Index Creation: idx_turnover
-- Purpose: Accelerates queries that sort or filter by 'Turnover'.
-- Scenario: "Show me the top 10 most active stocks by value for 2024-01-01".
-- Structure: Composite index on Date (equality) and Turnover (Range/Sort).
CREATE INDEX idx_turnover
ON market_stats (trade_date, turnover_in_lakhs DESC);

/*
 * ======================================================================================
 * Usage Examples (Commented Out):
 * ======================================================================================
 *
 * -- Insert a sample record:
 * INSERT INTO market_stats
 * VALUES ('2024-01-15', 'RELIANCE', 'NSE', 2750.00, 500000, 13750.00, 250000, 50.00);
 *
 * -- Query top 5 stocks by turnover for a specific date:
 * SELECT symbol, turnover_in_lakhs
 * FROM market_stats
 * WHERE trade_date = '2024-01-15'
 * ORDER BY turnover_in_lakhs DESC
 * LIMIT 5;
 */

Methodological Definition: Relational Schema for Market Statistics

The SQL specification above establishes a robust relational structure for persisting daily equity market data. The design prioritizes data integrity via constraints and read-performance via indexing strategies suitable for time-series financial analysis.

Step 1: Entity Definition (Table Schema)

The market_stats entity is defined with specific data types to handle financial precision:

  • Temporal Dimension (trade_date): Acts as the time-series anchor.
  • Categorical Dimensions (symbol, exchange): Identify the asset and the venue.
  • Quantitative Metrics:
    • close_price & turnover_in_lakhs use NUMERIC to prevent floating-point arithmetic errors common in monetary calculations.
    • traded_qty & delivery_qty use BIGINT to accommodate volumes exceeding 2 billion units (common in lower-priced equities).

Step 2: Constraint Enforcement (Primary Key)

A composite Primary Key is enforced on the tuple (trade_date, symbol, exchange). This mathematically guarantees uniqueness:

$\forall\, r \in R,\ \exists!\ (d, s, e)$

This prevents duplicate data ingestion (e.g., accidental double-loading of the same daily CSV file).

Step 3: Performance Optimization (B-Tree Indexing)

An auxiliary structure, idx_turnover, is created to optimize “Top N” queries.

  • Structure: Composite B-Tree Index.
  • Logic: It first narrows down the search space by trade_date, and then maintains a pre-sorted list of turnover_in_lakhs in descending order.
  • Benefit: Reduces the cost of retrieving "Top N by Value" from a full O(N log N) sort to an O(log N) index traversal that reads rows in pre-sorted order.

Step 4: Analytical Application

This schema supports foundational financial ratios such as the Delivery Ratio:

Delivery Ratio = Delivery Qty ⁄ Traded Qty

This metric is critical for distinguishing between speculative intraday activity and genuine investment accumulation.
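The Delivery Ratio above can be computed directly from rows shaped like the market_stats schema. A pandas sketch with dummy figures (not real market data):

```python
import pandas as pd

# Illustrative EOD rows mirroring the market_stats schema (dummy data)
stats = pd.DataFrame({
    'symbol': ['RELIANCE', 'YESBANK'],
    'traded_qty': [500_000, 50_000_000],
    'delivery_qty': [250_000, 5_000_000],
})

# Delivery Ratio = Delivery Qty / Traded Qty, expressed as a percentage
stats['delivery_pct'] = 100 * stats['delivery_qty'] / stats['traded_qty']
print(stats)
# A high delivery % suggests positions carried overnight (investment interest);
# a low delivery % suggests mostly intraday, speculative churn.
```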

Step 3: Measurement & Concentration Ratios (Measure)

The “Concentration Ratio” is a powerful tool to measure market breadth. It quantifies how much of the total market liquidity is concentrated in the top few stocks. A high concentration in turnover suggests a “top-heavy” market where only a few blue-chips are being actively traded, whereas a distributed turnover indicates healthy broad-market participation.

Mathematical Specification: Turnover Concentration Ratio (CRn)

$CR_n = \dfrac{\sum_{j=1}^{n} T_j}{\sum_{k=1}^{N} T_k}$

Description: The Concentration Ratio ($CR_n$) is the sum of turnover for the top $n$ stocks divided by the total turnover of all $N$ stocks in the universe. In India, $CR_{50}$ (Nifty 50 stocks) often accounts for more than 60% of daily NSE turnover.

Variable Definitions:

  • $CR_n$ (Resultant): The percentage of market turnover held by the top $n$ securities.
  • $T_j$ (Variable): Rupee turnover of stock $j$ (after sorting by turnover in descending order).
  • $T_k$ (Variable): Rupee turnover of stock $k$ in the entire market universe.
  • $n$ (Parameter): The number of top-ranked stocks to include (e.g., 10, 50, 100).
  • $N$ (Parameter): The total count of all listed stocks.
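A minimal sketch of $CR_n$ under the definition above, assuming per-stock turnover is available as a pandas Series; the symbols and turnover figures are dummy values:

```python
import pandas as pd

def concentration_ratio(turnovers: pd.Series, n: int) -> float:
    """CR_n: share of total market turnover held by the top-n stocks."""
    total = turnovers.sum()
    if total == 0:
        return 0.0
    # Sort descending so T_j are the n largest turnovers
    top_n = turnovers.sort_values(ascending=False).head(n).sum()
    return top_n / total

# Illustrative universe of 6 stocks (turnover in ₹ Crore; dummy figures)
universe = pd.Series(
    {'RELIANCE': 4500, 'HDFCBANK': 3800, 'INFY': 2200,
     'SMALLCAP1': 300, 'SMALLCAP2': 150, 'SMALLCAP3': 50}
)
cr3 = concentration_ratio(universe, 3)
print(f"CR_3 = {cr3:.2%}")
```

Here the top 3 names carry 10,500 of the 11,000 Crore total, so $CR_3 \approx 95\%$: a distinctly top-heavy tape.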

Exchange-Wise Volume & Turnover Splits (NSE vs. BSE)

India is unique for having two major exchanges: the National Stock Exchange (NSE) and the Bombay Stock Exchange (BSE). Most liquid stocks are listed on both. However, trading activity is rarely split 50/50. This creates the “Dual-Listing Paradox,” where a stock might show massive volume on BSE but superior monetary turnover on NSE due to institutional preference.

The Dual-Listing Paradox

Institutions (FIIs/DIIs) typically prefer NSE for its deeper liquidity and lower impact cost, leading to higher Rupee Turnover. Conversely, retail traders sometimes favor BSE for specific low-priced stocks or legacy reasons, which can inflate share volume counts. To understand the total liquidity available to a stock, one must aggregate and then split these metrics by exchange.

Mathematical Specification: Exchange Volume and Turnover Share

$S_{V,NSE} = \dfrac{V_{NSE}}{V_{NSE} + V_{BSE}} \qquad S_{T,NSE} = \dfrac{T_{NSE}}{T_{NSE} + T_{BSE}}$

Description: These formulas represent the proportion of activity occurring on the NSE relative to the total activity across both exchanges. $S_{V,NSE}$ measures the volume share, while $S_{T,NSE}$ measures the turnover share.

Variable Definitions:

  • $V_{NSE}, V_{BSE}$ (Variables): Traded volume on NSE and BSE respectively.
  • $T_{NSE}, T_{BSE}$ (Variables): Rupee turnover on NSE and BSE respectively.
  • $S_{V,NSE}$ (Resultant): NSE Volume contribution factor (decimal).
  • $S_{T,NSE}$ (Resultant): NSE Turnover contribution factor (decimal).
Python Code: Calculating Cross-Exchange Split
def calculate_exchange_split(nse_stats, bse_stats):
    """
    Calculates the relative market share between NSE (National Stock Exchange)
    and BSE (Bombay Stock Exchange) to assess liquidity fragmentation.

    Parameters:
    -----------
    nse_stats : dict
        Dictionary containing 'volume' (int) and 'turnover' (float) for NSE.
    bse_stats : dict
        Dictionary containing 'volume' (int) and 'turnover' (float) for BSE.

    Returns:
    --------
    dict
        A dictionary containing:
        - 'NSE_Volume_Share': Float (0.0 to 1.0) representing NSE's slice of activity.
        - 'NSE_Turnover_Share': Float (0.0 to 1.0) representing NSE's slice of value.
        - 'Is_NSE_Dominant': Boolean, True if NSE turnover share > 80%.
    """
    # 1. Input Validation: Safely get values, defaulting to 0 if missing
    n_vol = nse_stats.get('volume', 0)
    b_vol = bse_stats.get('volume', 0)
    n_trn = nse_stats.get('turnover', 0.0)
    b_trn = bse_stats.get('turnover', 0.0)

    # 2. Aggregation: Calculate the Total Addressable Market (TAM) for the session
    #    We sum the liquidity pools of both exchanges.
    total_volume = n_vol + b_vol
    total_turnover = n_trn + b_trn

    # 3. Safety Check: Avoid ZeroDivisionError
    #    If there is no trading activity on either exchange, shares are 0.
    if total_volume == 0:
        nse_v_share = 0.0
    else:
        nse_v_share = n_vol / total_volume

    if total_turnover == 0:
        nse_t_share = 0.0
    else:
        nse_t_share = n_trn / total_turnover

    # 4. Dominance Logic: Liquidity Concentration Analysis
    #    A market is considered "Dominant" if it holds the vast majority of liquidity.
    #    The 0.8 (80%) threshold is a standard heuristic for deep liquidity pools.
    is_dominant = nse_t_share > 0.8

    return {
        'NSE_Volume_Share': nse_v_share,
        'NSE_Turnover_Share': nse_t_share,
        'Is_NSE_Dominant': is_dominant
    }

# --- Main Execution Block ---
if __name__ == "__main__":
    # 1. Setup Dummy Data (Simulating a typical active trading day)
    #    NSE typically has significantly higher volume than BSE for liquid stocks.
    nse_data = {
        'volume': 15000000,          # 1.5 Crore shares
        'turnover': 4500000000.00    # 450 Crore INR
    }

    bse_data = {
        'volume': 1200000,           # 12 Lakh shares
        'turnover': 360000000.00     # 36 Crore INR
    }

    print("--- Input Statistics ---")
    print(f"NSE Turnover: ₹{nse_data['turnover']:,.2f}")
    print(f"BSE Turnover: ₹{bse_data['turnover']:,.2f}")

    # 2. Execute Analysis
    metrics = calculate_exchange_split(nse_data, bse_data)

    # 3. Display Results
    print("\n--- Market Split Analysis ---")
    print(f"NSE Volume Share: {metrics['NSE_Volume_Share']:.2%}")
    print(f"NSE Turnover Share: {metrics['NSE_Turnover_Share']:.2%}")
    print(f"Liquidity Dominant: {'Yes' if metrics['Is_NSE_Dominant'] else 'No'}")

    # Contextual Output
    if metrics['Is_NSE_Dominant']:
        print("\nInsight: Liquidity is deeply concentrated on NSE. Price discovery is likely driven by NSE flows.")
    else:
        print("\nInsight: Liquidity is fragmented. Arbitrage opportunities may exist between exchanges.")

Methodological Definition: Inter-Exchange Liquidity Split Analysis

The Python algorithm detailed above quantifies Market Fragmentation. By comparing the order flow between the National Stock Exchange (NSE) and the Bombay Stock Exchange (BSE), we determine where the primary “Price Discovery” is occurring.

Step 1: Aggregation of Liquidity Pools

To understand the total market depth, we first aggregate the disparate liquidity pools. We define the Total Volume (Vtotal) and Total Turnover (Ttotal) as the sum of the individual exchange metrics.

  • Vtotal = Vnse + Vbse
  • Ttotal = Tnse + Tbse

Step 2: Derivation of Relative Market Share

We calculate the proportional weight of the NSE against the total market. This is expressed as a ratio ranging from 0 to 1.

Sharense = Tnse ⁄ Ttotal

A result approaching 1.0 indicates a near-monopoly on liquidity, while a result near 0.5 indicates liquidity split evenly between the two venues.

Step 3: The Dominance Threshold (Boolean Logic)

The algorithm applies a heuristic threshold to determine liquidity concentration. The Boolean flag Is_NSE_Dominant evaluates true if:

Sharense > 0.80

Significance: In the Indian context, if one exchange controls >80% of the turnover, technical analysis and algorithmic execution strategies (like VWAP) should primarily reference that exchange’s data to minimize Impact Cost.

Author’s Note: This section quantifies the distribution across exchanges to identify where the “active” liquidity resides. It does not interpret price leadership (which exchange moves first) or arbitrage efficiency (the gap between prices), which are separate qualitative concerns.

Strategic Impact Across Time Horizons

  • Short-Term: Traders look for “Exchange Concentration.” If a stock usually trades 90% on NSE but suddenly sees a 40% spike on BSE, it may indicate a massive retail block trade or a specific arbitrage opportunity between the two order books.
  • Medium-Term: Persistent high turnover on one exchange ensures better execution with lower slippage. Monitoring the turnover split helps in selecting the primary exchange for executing large position sizes.
  • Long-Term: Shifts in turnover shares between exchanges can signal structural changes in the Indian market landscape (e.g., the historical migration of liquidity from BSE to NSE in the early 2000s).

Developing a unified data repository for these metrics is the first step toward advanced quantitative modeling. Solutions like those offered by TheUniBit can streamline this multi-source data ingestion, allowing for seamless cross-exchange analysis.

Normalization & Relative Metrics

Raw volume and turnover figures are difficult to compare across different stocks because of varying market capitalizations and price levels. To make these metrics meaningful for quantitative analysis, they must be normalized. This section introduces Relative Volume (RVOL) and Relative Turnover (RTurn), which measure current activity against a historical baseline.

Why Normalize Turnover?

For high-priced stocks (e.g., Bosch Ltd or Honeywell Automation), raw share volume is naturally low, often making it appear “illiquid” on volume charts. However, their Rupee Turnover might be higher than hundreds of penny stocks combined. Normalizing turnover by its rolling mean allows a trader to identify whether the current monetary commitment is an outlier relative to the stock’s own history, regardless of its nominal price.

Mathematical Specification: Relative Volume (RVOL) and Relative Turnover (RTurn)

$RVOL_t = \dfrac{V_t}{\frac{1}{n}\sum_{k=1}^{n} V_{t-k}} \qquad RTurn_t = \dfrac{T_t}{\frac{1}{n}\sum_{k=1}^{n} T_{t-k}}$

Description: RVOL and RTurn represent the ratio of current period volume/turnover to their respective $n$-period simple moving averages. A value of 1.0 indicates activity is exactly at the historical average, while a value of 5.0 indicates activity is five times higher than normal.

Variable Definitions:

  • $V_t, T_t$: Traded Volume and Rupee Turnover at time $t$.
  • $n$ (Parameter): The lookback window (typically 20 days for a standard trading month).
  • $\sum$: Summation of previous values in the window.
  • $RVOL_t, RTurn_t$ (Resultants): The relative intensity scores.
Python Algorithm: calculate_relative_metrics
import pandas as pd
import numpy as np

def calculate_relative_metrics(df, window=20):
    """
    Computes Relative Volume (RVOL) and Relative Turnover (RTurn)
    by comparing current activity against a historical Moving Average (MA).

    Parameters:
    -----------
    df : pd.DataFrame
        DataFrame containing time-series data with 'volume' and 'turnover' columns.
    window : int, optional
        The lookback period for the moving average (default is 20 for approx. 1 month).

    Returns:
    --------
    pd.DataFrame
        A copy of the DataFrame with added columns:
        - 'vol_ma': Moving Average of Volume.
        - 'turn_ma': Moving Average of Turnover.
        - 'RVOL': Relative Volume Ratio.
        - 'RTurn': Relative Turnover Ratio.
        Rows with NaN values (insufficient history) are removed.
    """

    # 1. Validation: Ensure required columns exist
    if not {'volume', 'turnover'}.issubset(df.columns):
        raise ValueError("Input DataFrame must contain 'volume' and 'turnover' columns.")

    # Work on a copy so the caller's DataFrame is not mutated
    df = df.copy()

    # 2. Baseline Calculation: Compute Rolling Averages
    # We use a Simple Moving Average (SMA) as the baseline for "normal" activity.
    df['vol_ma'] = df['volume'].rolling(window=window).mean()
    df['turn_ma'] = df['turnover'].rolling(window=window).mean()

    # 3. Ratio Derivation: Relative Volume (RVOL)
    # RVOL > 1.0 indicates higher than average volume.
    # RVOL > 2.0 is often considered an "Institutional Breakout".
    # The .where() guard avoids division by zero when the MA is 0.
    df['RVOL'] = df['volume'].div(df['vol_ma']).where(df['vol_ma'] > 0, 0.0)

    # 4. Ratio Derivation: Relative Turnover (RTurn)
    # Current Turnover / Turnover Moving Average.
    df['RTurn'] = df['turnover'].div(df['turn_ma']).where(df['turn_ma'] > 0, 0.0)

    # 5. Cleanup: Remove rows where the Moving Average couldn't be calculated
    # (The first 'window - 1' rows will be NaN)
    return df.dropna()

# --- Main Execution Block ---
if __name__ == "__main__":
    # 1. Setup Dummy Data
    # Generating 25 days of trading data to satisfy the 20-day window
    days = 25
    data = {
        'date': pd.date_range(start='2024-01-01', periods=days, freq='B'),  # Business days
        'volume': np.random.randint(100000, 500000, size=days),
        'price': np.random.uniform(100, 110, size=days)
    }
    df_trades = pd.DataFrame(data)

    # Derive Turnover (Quantity * Price)
    df_trades['turnover'] = df_trades['volume'] * df_trades['price']

    # Introduce a "Breakout" on the last day (Day 25)
    # Volume is set to roughly 3x the average to test RVOL
    df_trades.iloc[-1, df_trades.columns.get_loc('volume')] = 1000000
    df_trades.iloc[-1, df_trades.columns.get_loc('turnover')] = 1000000 * 105.00

    print("--- Input Data (Last 5 Days) ---")
    print(df_trades.tail())

    # 2. Execute Function
    print("\n--- Calculating Relative Metrics (Window=20) ---")
    df_result = calculate_relative_metrics(df_trades, window=20)

    # 3. Display Results
    # Showing the last few rows where valid calculations exist
    output_columns = ['date', 'volume', 'vol_ma', 'RVOL', 'RTurn']
    print(df_result[output_columns].tail().to_string(index=False))

    # Check the breakout signal
    last_rvol = df_result['RVOL'].iloc[-1]
    if last_rvol > 2.0:
        print(f"\nSignal Detected: High Relative Volume ({last_rvol:.2f}x). Potential Trend Start.")

Methodological Definition: Relative Volume and Turnover Analysis

This process normalizes current market activity against historical baselines to identify anomalies. By converting absolute integers (Volume) into relative ratios, analysts can spot institutional participation regardless of the stock’s liquidity tier.

Step 1: Establishing the Historical Baseline

To determine if today’s activity is significant, we first define “normal” behavior using a Simple Moving Average (SMA). A 20-day window is standard, representing one trading month.

  • Volume Baseline ($V_{avg}$): The arithmetic mean of traded quantity over the last $n$ periods.
  • Turnover Baseline ($T_{avg}$): The arithmetic mean of traded value over the last $n$ periods.

Step 2: Derivation of Relative Volume (RVOL)

RVOL measures the intensity of trading interest. It is calculated by dividing the current session’s volume (Vt) by the Volume Baseline.

$RVOL = \dfrac{V_t}{V_{avg}}$

Interpretation:

  • RVOL ≈ 1.0: Standard activity.
  • RVOL > 2.0: High activity (often indicates news or institutional entry).

Step 3: Derivation of Relative Turnover (RTurn)

RTurn measures the intensity of capital flow. While volume tracks shares, turnover tracks money. This is critical for high-priced stocks where volume appears low but capital commitment is high.

$RTurn = \dfrac{T_t}{T_{avg}}$

By monitoring both RVOL and RTurn, one can distinguish between high-volume “churn” in penny stocks versus genuine capital inflows in heavyweights.
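The two ratios above can also be computed in a fully vectorized form, without row-wise loops. The sketch below uses synthetic figures and the same 'volume'/'turnover' column convention as the rest of this article; a zero baseline is mapped to NaN rather than raising a division error.

```python
import pandas as pd
import numpy as np

# Synthetic daily data: 21 sessions of volume and turnover (figures are ours)
rng = np.random.default_rng(42)
df = pd.DataFrame({
    'volume': rng.integers(100_000, 500_000, 21),
    'turnover': rng.uniform(1e7, 5e7, 21),
})

window = 20
# Rolling baselines representing "normal" activity
vol_ma = df['volume'].rolling(window).mean()
turn_ma = df['turnover'].rolling(window).mean()

# Vectorized ratios; a zero baseline becomes NaN instead of raising
df['RVOL'] = df['volume'] / vol_ma.replace(0, np.nan)
df['RTurn'] = df['turnover'] / turn_ma.replace(0, np.nan)

# Only the rows with a full 20-day history produce valid ratios
print(df[['RVOL', 'RTurn']].dropna())
```

With 21 rows and a 20-day window, only the last two sessions have a complete baseline, mirroring the `dropna()` behavior of the function above.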

Temporal Distribution: Intraday Skewness

Market activity in India is not uniform throughout the day. It typically follows a “U-Shaped” curve, where volume and turnover are concentrated during the opening 45 minutes (9:15 AM – 10:00 AM) and the closing 45 minutes (2:45 PM – 3:30 PM). Measuring the Skewness of turnover reveals whether activity is being front-loaded by institutions or back-loaded by retail square-offs of intraday positions.

Morning vs. Evening Concentration

A “Positive Skew” in intraday turnover indicates that the majority of value was exchanged early in the session. This often suggests high-conviction institutional positioning following overnight news. Conversely, a “Negative Skew” indicates that the bulk of the money moved late in the day, often associated with passive index funds rebalancing at the close or retail traders closing intraday positions.

Mathematical Specification: Turnover Temporal Skewness (γ)

$\gamma = \dfrac{1}{M}\sum_{m=1}^{M}\left(\dfrac{T_m - \mu_T}{\sigma_T}\right)^3$

Description: This formula calculates the third standardized moment of turnover over $M$ intraday intervals (e.g., 5-minute bins). It measures the asymmetry of the turnover distribution relative to the mean turnover of the day.

Variable Definitions:

  • $\gamma$ (Resultant): Skewness coefficient.
  • $T_m$ (Variable): Rupee turnover in interval $m$.
  • $\mu_T$ (Parameter): Mean turnover across all intervals in the session.
  • $\sigma_T$ (Parameter): Standard deviation of turnover across intervals.
  • $M$ (Parameter): Total number of intraday intervals.
Python Implementation: Measuring Intraday Skewness
import pandas as pd
import numpy as np
from scipy.stats import skew

def measure_turnover_skew(intraday_df):
    """
    Calculates the statistical skewness of the intraday turnover distribution.

    This metric identifies whether trading activity is concentrated at the
    start of the day (Front-loaded) or the end of the day (Back-loaded).

    Parameters:
    -----------
    intraday_df : pd.DataFrame
        A DataFrame containing intraday market data.
        Requirements:
        - Must contain a 'turnover' column (float/int).
        - Optionally has a DatetimeIndex to filter for market hours (09:15-15:30).

    Returns:
    --------
    float
        The skewness coefficient:
        - Positive (> 0): Activity is front-loaded (Morning dominance).
        - Zero (~ 0): Activity is symmetrically distributed (Bell curve).
        - Negative (< 0): Activity is back-loaded (Closing, MOC dominance).
    """

    # 1. Validation: Ensure required column exists
    if 'turnover' not in intraday_df.columns:
        raise ValueError("Input DataFrame must contain a 'turnover' column.")

    # 2. Pre-processing: Time-based Filtering (Optional but Recommended)
    # If the index is a DatetimeIndex, we filter strictly for NSE market hours.
    df_clean = intraday_df.copy()
    if isinstance(df_clean.index, pd.DatetimeIndex):
        # Filter between 09:15 and 15:30
        df_clean = df_clean.between_time('09:15', '15:30')

    # 3. Data Extraction: Get the turnover series
    turnover_series = df_clean['turnover'].values

    # 4. Calculation: Compute the Fisher-Pearson coefficient of skewness
    # bias=False applies the sample-size correction (adjusted estimator)
    try:
        skew_val = skew(turnover_series, bias=False)
    except Exception as e:
        print(f"Error calculating skew: {e}")
        return 0.0

    # Handle NaN result (e.g., if all values are constant or the series is empty)
    if np.isnan(skew_val):
        return 0.0

    return skew_val

# --- Main Execution Block ---
if __name__ == "__main__":
    # 1. Setup Dummy Data: Simulating Intraday Bins (1-minute intervals)
    # We will simulate a "Front-loaded" day (Morning Spike)
    # Create time range
    times = pd.date_range("2024-01-01 09:15", "2024-01-01 15:30", freq="1min")
    n_bins = len(times)

    # Generate Turnover Data: Exponential decay to simulate the morning rush
    # High at the start, tapering off
    decay = np.exp(-np.linspace(0, 5, n_bins))
    noise = np.random.normal(0, 0.1, n_bins)
    turnover_values = (decay + abs(noise)) * 1000000  # Base scale

    df_intraday = pd.DataFrame({'turnover': turnover_values}, index=times)

    print("--- Input Data Summary ---")
    print(f"Total Bins: {len(df_intraday)}")
    print(f"First 5 Mins Turnover: {df_intraday['turnover'].head().sum():,.0f}")
    print(f"Last 5 Mins Turnover: {df_intraday['turnover'].tail().sum():,.0f}")

    # 2. Execute Analysis
    print("\n--- Calculating Skewness ---")
    skewness_result = measure_turnover_skew(df_intraday)

    # 3. Display Results & Interpretation
    print(f"Skewness Coefficient: {skewness_result:.4f}")

    if skewness_result > 0.5:
        print("Interpretation: Strongly Positive (Front-loaded). Traders were active at the Open.")
    elif skewness_result < -0.5:
        print("Interpretation: Strongly Negative (Back-loaded). Activity spiked near the Close.")
    else:
        print("Interpretation: Near Zero. Activity was evenly distributed.")

Methodological Definition: Intraday Turnover Skewness

The code above implements a statistical moment analysis to determine the temporal distribution of liquidity during a trading session. By treating the trading day as a probability distribution of turnover, we can quantify whether market participation was aggressive at the “Open” or the “Close”.

Step 1: Series Extraction and Sanitization

The algorithm isolates the Turnover vector (T) from the intraday time-series. A critical pre-processing step involves filtering the data to strictly adhere to market hours (e.g., 09:15 to 15:30 IST) to prevent pre-market block deals from distorting the statistical profile.

Step 2: Computation of the Third Moment

Skewness is the third standardized moment of a distribution. The algorithm calculates this using the Fisher-Pearson coefficient. For a set of turnover values $x_1, \dots, x_n$ with mean $\mu$ and standard deviation $\sigma$, the formula is:

$\text{Skew} = \dfrac{1}{n}\sum_{i=1}^{n}\left(\dfrac{x_i - \mu}{\sigma}\right)^3$

Step 3: Financial Interpretation

The resulting scalar provides a directional signal regarding market sentiment:

  • Positive Skew (> 0): The distribution has a long tail to the right. In market terms, the “bulk” of the activity occurred early, with outliers (high turnover bins) clustered at the start. This indicates Reactionary Trading (reacting to overnight news).
  • Negative Skew (< 0): The distribution has a long tail to the left. The “bulk” of activity is late in the day. This indicates Position Adjusting or institutional “Market-On-Close” (MOC) flows.

Step 4: Application in Strategy

This metric is essential for VWAP (Volume Weighted Average Price) execution algorithms. A consistent positive skew profile suggests that execution algorithms should front-load their participation to match the natural liquidity curve of the stock.
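As a rough illustration of this idea (a sketch, not a production execution model), the snippet below converts a synthetic, front-loaded turnover series into a cumulative participation curve. A VWAP algorithm tracking such a curve would schedule most of its fills early in the session.

```python
import pandas as pd
import numpy as np

# Synthetic front-loaded intraday turnover (decaying through the session)
times = pd.date_range("2024-01-01 09:15", "2024-01-01 15:30", freq="5min")
turnover = np.exp(-np.linspace(0, 3, len(times))) * 1_000_000
profile = pd.Series(turnover, index=times)

# Cumulative share of the day's turnover traded by each time bin
participation = profile.cumsum() / profile.sum()

# On a front-loaded day, 50% participation is reached well before midday
half_done = participation[participation >= 0.5].index[0]
print(f"50% of turnover traded by: {half_done.time()}")
```

Feeding a historical average of this curve (rather than a single day) into the scheduler is the usual refinement.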

Author’s Note: These temporal metrics quantify the uneven distribution of activity throughout the day. They do not suggest execution timing strategies (e.g., “when to buy”), as timing is dependent on individual risk tolerance and liquidity needs.

Fetch-Store-Measure Workflow: Normalization & Skew

  • Fetch: Pull 1-minute or 5-minute intraday snapshots for the target stock. For historical RVOL, daily Bhavcopy data for the last 60 days is required to establish a stable 20-day mean.
  • Store: Store intraday interval data in a specialized “intraday_activity” table. Ensure timestamps are timezone-aware (Asia/Kolkata).
  • Measure: Calculate RVOL at the start of the day (first 15 mins) and compare it to the historical RVOL of that specific time slot. A stock trading 10x its usual 9:15 AM volume is a significant outlier.
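The Measure step above can be sketched as follows. All slot volumes here are synthetic, and the idea of a slot-specific baseline (today's 9:15–9:30 volume versus the historical mean of that same slot) is the only assumption carried over from the workflow.

```python
import pandas as pd
import numpy as np

# Hypothetical history: first-15-minute volume for the last 20 sessions
rng = np.random.default_rng(7)
opening_history = pd.Series(rng.integers(50_000, 80_000, 20))

todays_opening_volume = 650_000  # today's 09:15-09:30 volume (synthetic)

# Slot-specific RVOL: today's slot volume vs. the same slot's historical mean
slot_rvol = todays_opening_volume / opening_history.mean()
print(f"Opening-slot RVOL: {slot_rvol:.1f}x")
if slot_rvol > 10:
    print("Outlier: trading roughly 10x its usual 9:15 AM volume")
```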

Impact on Trading Horizons

  • Short-Term: Intraday skewness identifies where the “liquidity surge” is occurring. If a stock sees a positive skew with rising prices, it indicates aggressive morning buying by institutional desks.
  • Medium-Term: RVOL and RTurn identify stocks entering a “new regime” of interest. A breakout accompanied by an RTurn > 3.0 is statistically more significant than a breakout on low turnover.
  • Long-Term: Sustained increases in RVOL over months indicate institutional accumulation or distribution, suggesting that the stock’s fundamental profile is being re-rated by the market.

By shifting from raw counts to relative and temporal metrics, analysts can filter out market noise. Integrated platforms like TheUniBit provide the historical depth necessary to calculate these benchmarks accurately across thousands of securities.

Advanced Python Implementations & Final Compiled Reference

To conclude this quantitative framework, we transition from foundational metrics to advanced implementations. Professional-grade market analysis requires encapsulating logic into reusable classes and leveraging high-performance libraries to handle the millions of data points generated by the National Stock Exchange (NSE) and Bombay Stock Exchange (BSE).

Custom Class Implementation: EquityActivityTracker

The EquityActivityTracker class acts as a wrapper to handle multi-stock comparisons. It integrates volume, turnover, and delivery data, allowing for complex queries such as identifying stocks where turnover is rising but deliverable quantity is falling (a sign of increasing speculative intraday activity).

Python Code: EquityActivityTracker Wrapper
import pandas as pd
import numpy as np

class EquityActivityTracker:
    def __init__(self, ticker_data, shares_outstanding=None):
        """
        Initializes the tracker with historical market data.

        Parameters:
        -----------
        ticker_data : pd.DataFrame
            DataFrame containing the following columns:
            - 'date': DateTime object.
            - 'close': Closing price of the asset.
            - 'volume': Total traded quantity.
            - 'turnover': Total traded value (Price * Volume).
            - 'delivery_qty': Quantity marked for delivery (optional, required for divergence).
        shares_outstanding : int, optional
            Total number of shares issued by the company.
            Required for calculating true 'Turnover Velocity'.
        """
        # Ensure data is sorted by date for rolling calculations
        self.df = ticker_data.sort_values('date').copy()
        self.shares_outstanding = shares_outstanding

    def get_liquidity_regime(self, window=20):
        """
        Calculates liquidity metrics including Turnover Velocity and Z-Scores.

        Parameters:
        -----------
        window : int
            Lookback period for rolling statistics (default 20 days).

        Returns:
        --------
        pd.DataFrame
            The last record containing the computed 'turnover_zscore'.
        """

        # 1. Calculate Turnover Velocity
        # Definition: The ratio of shares traded to total shares outstanding.
        if self.shares_outstanding:
            self.df['turnover_velocity'] = (self.df['volume'] / self.shares_outstanding) * 100
        else:
            # Fallback: Turnover / Close approximates the traded volume.
            # This is technically 'Traded Volume', not true 'Velocity',
            # so the metric loses its cross-stock comparability.
            self.df['turnover_velocity'] = self.df['turnover'] / self.df['close']

        # 2. Calculate Rolling Statistical Measures
        # Mean Turnover over the window
        self.df['avg_turnover'] = self.df['turnover'].rolling(window=window).mean()

        # Standard Deviation of Turnover over the window
        self.df['std_turnover'] = self.df['turnover'].rolling(window=window).std()

        # 3. Z-Score Calculation (Standard Score)
        # Measures how many standard deviations the current turnover is from the mean.
        # Z > 3 implies an extreme anomaly (Liquidity Event).
        # A small epsilon (1e-9) avoids division by zero if std is 0.
        self.df['turnover_zscore'] = (
            (self.df['turnover'] - self.df['avg_turnover']) /
            (self.df['std_turnover'] + 1e-9)
        )

        # Return the specific analytical columns for the latest date
        return self.df[['date', 'turnover', 'turnover_zscore', 'turnover_velocity']].tail(1)

    def identify_divergence(self):
        """
        Identifies 'Speculative Divergence': Days with High Volume but Low Delivery.

        Logic:
        If trading activity is high but investors are not taking delivery,
        the activity is likely driven by intraday speculation (Day Trading).

        Returns:
        --------
        pd.DataFrame
            Subset of days where Delivery % is below the historical median.
        """
        # 1. Validation: Check for delivery data
        if 'delivery_qty' not in self.df.columns:
            raise ValueError("Dataframe requires 'delivery_qty' column for divergence analysis.")

        # 2. Calculate Delivery Percentage
        # (Delivery / Total Volume) * 100
        self.df['delivery_pct'] = (self.df['delivery_qty'] / self.df['volume']) * 100

        # 3. Determine the Threshold (Median)
        # The median is a robust baseline for "normal" delivery behavior
        median_delivery = self.df['delivery_pct'].median()

        # 4. Filter for Divergence
        # Condition: Delivery % is strictly less than the median
        divergence_df = self.df[self.df['delivery_pct'] < median_delivery].copy()

        return divergence_df[['date', 'close', 'volume', 'delivery_pct']]

# --- Main Execution Block ---
if __name__ == "__main__":
    # 1. Setup Dummy Data (25 Days)
    dates = pd.date_range(start='2024-01-01', periods=25, freq='B')

    # Simulate data
    data = {
        'date': dates,
        'close': np.random.uniform(100, 110, size=25),
        'volume': np.random.randint(100000, 500000, size=25),
        'delivery_qty': np.random.randint(20000, 100000, size=25)  # Partial delivery
    }
    df_market = pd.DataFrame(data)

    # Derive Turnover
    df_market['turnover'] = df_market['volume'] * df_market['close']

    # Introduce a 'Speculative Spike' on the last day
    # High Volume, Low Delivery
    df_market.iloc[-1, df_market.columns.get_loc('volume')] = 2000000      # Massive volume
    df_market.iloc[-1, df_market.columns.get_loc('delivery_qty')] = 50000  # Low delivery
    df_market.iloc[-1, df_market.columns.get_loc('turnover')] = 2000000 * 105

    print("--- Initializing Tracker ---")
    # We assume 10 Million shares outstanding for accurate velocity calc
    tracker = EquityActivityTracker(df_market, shares_outstanding=10000000)

    # 2. Test Liquidity Regime
    print("\n--- Liquidity Regime (Latest) ---")
    regime = tracker.get_liquidity_regime(window=20)
    print(regime.to_string(index=False))

    # Interpretation of Z-Score
    z_score = regime['turnover_zscore'].iloc[0]
    if z_score > 3:
        print(f"Alert: Extreme Liquidity Event detected (Z-Score: {z_score:.2f})")

    # 3. Test Divergence Analysis
    print("\n--- Speculative Divergence (Last 5 Hits) ---")
    divergence = tracker.identify_divergence()
    print(divergence.tail().to_string(index=False))

Methodological Definition: Equity Activity & Liquidity Tracking

The EquityActivityTracker class serves as a quantitative engine for profiling the “Quality” of market participation. By decoupling raw volume from delivery data, it distinguishes between genuine accumulation (investment) and intraday churn (speculation).

Step 1: Metric Normalization (Turnover Velocity)

To compare activity across different timeframes or different assets, we calculate Turnover Velocity. This normalizes the traded volume against the total float of the company.

$\text{Velocity (\%)} = \dfrac{V_{traded}}{S_{outstanding}} \times 100$

If $S_{outstanding}$ is unavailable, the code falls back to raw volume, though this reduces the metric’s comparative power.

Step 2: Statistical Anomaly Detection (Z-Score)

To identify liquidity shocks, the system employs a Z-Score methodology based on a rolling window (default n=20). This quantifies how unusual the current turnover is relative to recent history.

$Z = \dfrac{T_{current} - \mu_{turnover}}{\sigma_{turnover}}$

  • μ (Mu): Rolling Mean of Turnover.
  • σ (Sigma): Rolling Standard Deviation.
  • Interpretation: A Z-Score > 3.0 indicates a statistically significant liquidity event, falling outside the ±3σ band that covers roughly 99.7% of observations in a normal distribution.

Step 3: Divergence Analysis (Speculation vs. Investment)

This module isolates “Low Quality” rallies. It calculates the Delivery Percentage and compares it to the historical median.

$\text{Delivery \%} = \dfrac{Q_{delivery}}{Q_{traded}} \times 100$

The Divergence Signal:
Condition: Price ↑ AND Volume ↑ BUT Delivery % ↓ (below Median).
Implication: The price move is supported by intraday speculators who squared off positions before close, rather than investors taking ownership. This often precedes a reversal.

Final Compiled Reference Section

This section consolidates the mathematical specifications and curated resources essential for building a production-ready market monitoring system.

A. All Mathematical Specifications

Beyond basic volume and turnover, these secondary ratios provide deeper insights into the “velocity” of money and the stability of trading interest.

Share Turnover Ratio (STR)

The Share Turnover Ratio measures how many times the entire equity base of a company has changed hands during a period. It is a standardized metric for comparing liquidity across different market caps.

Mathematical Specification: Share Turnover Ratio (STR)

$STR = \dfrac{\sum_{t=1}^{N} V_t}{S_{outstanding}}$

Description: STR is the ratio of total shares traded ($V$) over $N$ days to the total shares outstanding. An STR of 1.0 (or 100%) implies the entire company’s float has theoretically traded once in that period.

Variable Definitions:

  • STR (Resultant): Share Turnover Ratio (expressed as a decimal or percentage).
  • $V_t$ (Summand): Traded volume on day $t$.
  • $S_{outstanding}$ (Constant): Total number of issued shares for the security.
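A minimal numeric sketch of the STR formula, with assumed figures chosen so the float turns over exactly once in the window:

```python
import numpy as np

# Assumed inputs: 20 days of traded volume and a fixed share count
daily_volume = np.full(20, 500_000)   # shares traded per day (assumption)
shares_outstanding = 10_000_000       # total issued shares (assumption)

# STR: total shares traded over the window / shares outstanding
str_ratio = daily_volume.sum() / shares_outstanding
print(f"Share Turnover Ratio: {str_ratio:.2f} ({str_ratio:.0%})")
```

Here 20 × 500,000 = 10,000,000 traded shares against a 10,000,000-share float gives STR = 1.0, i.e., the entire float has theoretically changed hands once.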

Volume Stability Metric (VSM)

VSM quantifies how consistent the trading activity is. High-stability stocks are easier for institutions to enter without moving the price, whereas low-stability stocks are prone to “liquidity vacuums.”

Mathematical Specification: Volume Stability Metric (VSM)

$VSM = 1 - \dfrac{\sigma_V}{\mu_V}$

Description: VSM is one minus the Coefficient of Variation ($CV$) of volume. It represents how tightly daily volume clusters around its mean. Values closer to 1.0 indicate highly predictable, stable liquidity.

Variable Definitions:

  • $\sigma_V$ (Coefficient): Standard deviation of volume over the window.
  • $\mu_V$ (Mean): Average volume over the window.
  • VSM (Resultant): Stability score.
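A quick sketch of the VSM computation on two hypothetical volume series (population standard deviation assumed; both series are invented for contrast):

```python
import numpy as np

def vsm(volumes):
    """Volume Stability Metric: 1 minus the coefficient of variation of volume."""
    v = np.asarray(volumes, dtype=float)
    return 1.0 - v.std(ddof=0) / v.mean()

stable = [100_000, 102_000, 98_000, 101_000, 99_000]    # tight cluster
erratic = [10_000, 400_000, 5_000, 900_000, 50_000]     # liquidity vacuums

print(f"Stable stock VSM:  {vsm(stable):.3f}")   # close to 1.0
print(f"Erratic stock VSM: {vsm(erratic):.3f}")  # much lower; can go negative
```

Note that when volume swings exceed the mean (CV > 1), VSM turns negative, flagging exactly the “liquidity vacuum” profile described above.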

B. Official Data Sources & News Triggers

Reliable quantitative analysis is only as good as the underlying data. In the Indian market, specific triggers can cause massive deviations in turnover that do not reflect organic retail interest.

  • Official Sources:
    • NSE India Daily Bhavcopy: The primary record for Volume, Turnover, and Delivery stats.
    • SEBI Monthly Bulletin: Provides macro-level turnover aggregation across cash and derivatives segments.
    • BSE Historical Archives: Essential for cross-verifying splits and dividends that affect share counts.
  • News Triggers:
    • Block Deals: Large negotiated trades (minimum order value of ₹10 Cr under SEBI’s 2017 framework) executed in a separate trading window. They create massive turnover spikes without impacting the public order book.
    • Bulk Deals: Transactions involving >0.5% of a company’s total shares executed during regular hours. These are immediate “Turnover Outliers.”
    • Rebalancing Dates: MSCI or FTSE index rebalancing days often see “Turnover Concentration” in the final 30 minutes of trade.

C. Python-Friendly APIs & Libraries

To implement the “Fetch-Store-Measure” workflow, these libraries are the industry standard for Indian equity data:

  • nsepython: The most direct wrapper for NSE’s live and historical endpoints. Best for real-time turnover monitoring.
  • jugaad-data: Optimized for bulk-downloading daily Bhavcopy files and storing them in local formats.
  • nselib: Useful for handling market holiday calendars, ensuring your time-series calculations (like 20-day MAs) are correct.
  • SQLAlchemy: The preferred ORM for interacting with PostgreSQL/TimescaleDB for large-scale storage.
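The “Store” step might look like the following sketch. The `daily_activity` table name and sample rows are our assumptions, and an in-memory SQLite database stands in for PostgreSQL/TimescaleDB so the snippet runs self-contained; pandas’ `to_sql`/`read_sql` work against either backend through SQLAlchemy.

```python
import pandas as pd
from sqlalchemy import create_engine

# SQLite in-memory stands in for PostgreSQL/TimescaleDB in this sketch
engine = create_engine("sqlite:///:memory:")

# Two sample Bhavcopy-style rows (figures are invented)
bhavcopy = pd.DataFrame({
    'date': pd.to_datetime(['2024-01-01', '2024-01-02']),
    'symbol': ['RELIANCE', 'RELIANCE'],
    'volume': [5_000_000, 6_200_000],
    'turnover': [1.25e10, 1.55e10],
})

# Append each day's rows into the activity table
bhavcopy.to_sql('daily_activity', engine, if_exists='append', index=False)

# Read back for downstream RVOL / z-score calculations
stored = pd.read_sql('SELECT symbol, volume FROM daily_activity', engine)
print(stored)
```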

Strategic Summary: Long-Term Impact

In the Long-Term, monitoring market-wide Rupee Turnover is a proxy for the total equity participation in the Indian economy. As turnover grows relative to India’s GDP, it indicates a maturing, more liquid capital market. For individual stocks, a long-term rise in Turnover Velocity often precedes inclusion in major indices like the Nifty 50, as liquidity is a primary requirement for index membership.

By integrating these Pythonic tools and mathematical frameworks, analysts can move beyond simple price-action to understand the monetary forces driving the Indian stock market. For those seeking high-fidelity data streams to feed these models, TheUniBit provides a comprehensive ecosystem for advanced equity research.
