The Anatomy of a Trade: Settlement vs. Churn
In the high-velocity environment of the Indian equity markets, every recorded transaction on the National Stock Exchange (NSE) or the Bombay Stock Exchange (BSE) carries a hidden structural DNA. While the ticker tape displays a unified stream of volume, that volume is bifurcated into two distinct economic activities: speculative “churn” and structural “settlement.” Understanding the distinction between “Permanent” and “Temporary” volume is essential for any participant looking to decipher the underlying architecture of market movements.
Temporary volume, or intraday churn, represents positions that are entered and exited within the same trading session. These trades provide essential liquidity and aid in price discovery but do not result in a change of ownership at the depository level (NSDL/CDSL). Conversely, Permanent volume—represented by Delivery—indicates a complete transfer of assets. When a trade is marked for delivery, the buyer intends to hold the security in their demat account, and the seller relinquishes their stake. This “Structural Trading Statistic” known as Delivery Percentage serves as the ultimate filter, separating noise from conviction.
Defining Delivery Percentage
Delivery Percentage is a proportional metric that quantifies the fraction of total trading activity that concludes in an actual exchange of securities. Unlike raw volume, which can be artificially inflated by high-frequency trading (HFT) or scalping, the delivery ratio provides a stabilized view of how much capital is actually being committed to or withdrawn from a stock’s float. It is the bridge between the chaotic “Trade-to-Trade” environment and the orderly “Settlement” cycle.
For a specialized software partner like TheUniBit, the focus is on engineering the systems that extract this structural signal. By utilizing Python to build automated “Bhavcopy” (market report) parsers, developers can reconcile gross trading intensity with net settlement data. This allows traders to move beyond sentiment-based narratives and focus on the hard mathematical reality of market composition, ensuring that data pipelines are robust enough to handle the late-evening exchange updates where delivery figures are finalized.
Mathematical Specification & Formulaic Definition
To analyze the structural integrity of a trading session, we must define the relationship between the gross activity and the settled outcome. The Delivery Percentage is the primary resultant of this relationship, acting as a coefficient of investment intent within the broader sea of liquidity.
Mathematical Definition of Delivery Percentage
DPt = (Vdel / Vtotal) × 100
Python Implementation of Delivery Percentage Calculation
def calculate_delivery_percentage(deliverable_qty: int, total_traded_qty: int) -> float:
    """
    Calculates the structural delivery ratio (Delivery Percentage) for a given security.

    This function determines the proportion of trading volume that resulted in actual
    transfer of ownership (delivery) versus intra-day squaring off.

    Parameters:
    ----------
    deliverable_qty (int):
        The quantity of shares marked for settlement (delivery).
        Represents the 'real' demand/supply excluding speculation.
    total_traded_qty (int):
        The total gross volume of shares traded during the session.
        Includes both delivery and intraday (squared-off) trades.

    Returns:
    -------
    float:
        The Delivery Percentage rounded to two decimal places.
        Returns 0.0 if no trading occurred (Total Quantity is 0).

    Formula:
    -------
    Delivery % = (Deliverable Quantity / Total Traded Quantity) * 100
    """
    # Validation: Prevent ZeroDivisionError if a stock is halted or has zero volume.
    # In market data terms, zero volume implies zero delivery.
    if total_traded_qty == 0:
        return 0.0

    # Core calculation: division yields a float; multiplying by 100
    # converts the ratio into a percentage.
    delivery_pct = (deliverable_qty / total_traded_qty) * 100

    # Rounding to 2 decimal places is standard for NSE/BSE reporting.
    return round(delivery_pct, 2)


# --- Execution Block ---
if __name__ == "__main__":
    # Example usage mimicking NSE (National Stock Exchange of India) data format
    # Scenario: A liquid large-cap stock
    v_total = 1500000  # Total Traded Quantity
    v_del = 450000     # Deliverable Quantity (Shares actually changing hands)

    try:
        # Calculate the metric
        result = calculate_delivery_percentage(v_del, v_total)

        # Output the result
        print("-" * 40)
        print("Market Data Analysis: Delivery Statistics")
        print("-" * 40)
        print(f"Total Traded Quantity : {v_total:,}")
        print(f"Deliverable Quantity  : {v_del:,}")
        print("-" * 40)
        print(f"Structural Delivery Percentage: {result}%")
        print("-" * 40)

        # Interpretation logic (simple threshold check)
        if result > 40.0:
            print("Interpretation: High accumulation/delivery observed.")
        else:
            print("Interpretation: High speculative/intraday activity.")
    except Exception as e:
        print(f"An error occurred during calculation: {e}")
Methodological Definition: Structural Delivery Ratio
The code implements a financial algorithm to quantify the quality of trading volume. It isolates the fraction of trades that result in the actual transfer of securities between demat accounts (delivery) from the total turnover, which includes speculative intraday squaring-off.
Mathematical Specification
The algorithm utilizes the standard ratio formula for delivery percentage, denoted as DP. The calculation is performed as follows:
DP = (Qdel / Qtotal) × 100
Where:
- Qdel represents the Deliverable Quantity (Client-Level Settlement).
- Qtotal represents the Total Traded Quantity (Gross Volume).
Step-by-Step Algorithmic Logic
- Step 1: Input Validation & Zero-Volume Guard
The system first checks the denominator, Total Traded Quantity (Qtotal). If this value is zero, the function immediately returns 0.0. This prevents a computational "Division by Zero" error in scenarios where a security is halted or has no liquidity during a session.
- Step 2: Ratio Computation
The function divides the Deliverable Quantity by the Total Traded Quantity. This yields a raw decimal ratio (e.g., 0.30 for 30%).
- Step 3: Percentage Conversion
The decimal ratio is multiplied by 100 to convert it into a standard percentage format.
- Step 4: Precision Adjustment
The final result is rounded to 2 decimal places (e.g., converting 33.3333… to 33.33). This ensures the output adheres to the standard financial reporting formats used by exchanges like the NSE and BSE.
Formula Components and Variable Explanation
The formula for Delivery Percentage (DP) is a simple but powerful ratio. It works by isolation: it separates the "settled" portion of the volume from the gross total to determine the degree of ownership transfer. The metric is bounded between 0 and 100; in practice, a 0% reading is nearly impossible for liquid Indian stocks because certain series carry mandatory settlement rules.
- DPt (Resultant): The Delivery Percentage at time t. It represents the efficiency of volume conversion into equity holdings.
- Vdel (Numerator): The Deliverable Quantity. This is the total number of shares where the buyer and seller have opted for settlement rather than squaring off the position intraday.
- Vtotal (Denominator): Total Traded Quantity. The sum of all buy and sell matches executed on the exchange during the session.
- 100 (Constant/Multiplier): A scaling factor used to express the ratio as a percentage for easier cross-asset comparison.
- × (Operator): The multiplication operator used to scale the fraction.
- / (Operator): The division operator representing the ratio of settlement to total activity.
Constraint Logic and Structural Integrity
In the Indian equity market, specifically for the “Cash” segment, certain constraints apply to these variables. The Deliverable Quantity (Vdel) can never exceed the Total Traded Quantity (Vtotal), as you cannot settle more shares than were traded. Mathematically, this implies that the ratio will always exist within the domain [0, 1]. Any data point suggesting a ratio > 100% is a flag for data corruption or a corporate action adjustment error.
Furthermore, different stock categories (Series) on the NSE have different delivery floors. For instance, stocks in the “Trade-for-Trade” (BE series) segment have a structural requirement of 100% delivery, meaning every single trade must result in a settlement. Analyzing these constraints allows a Python-driven system to validate the integrity of the data being ingested.
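These constraints translate directly into data-quality checks. A minimal validation sketch is shown below; the column names (`symbol`, `series`, `total_volume`, `delivery_volume`, `delivery_percentage`) are assumed to mirror the storage schema used later in this article, not an official exchange layout:

```python
import pandas as pd

def flag_integrity_violations(df: pd.DataFrame) -> pd.DataFrame:
    """Flag rows that violate structural delivery constraints.

    Check 1: deliverable quantity must never exceed total traded quantity
             (the ratio must live in the domain [0, 1]).
    Check 2: 'BE' (Trade-for-Trade) series rows must show 100% delivery.
    """
    df = df.copy()
    # Ratio outside [0, 1] signals data corruption or a corporate-action error
    df['ratio_violation'] = df['delivery_volume'] > df['total_volume']
    # Trade-for-Trade series must settle every single share
    df['series_violation'] = (df['series'] == 'BE') & (df['delivery_percentage'] < 100.0)
    return df[df['ratio_violation'] | df['series_violation']]

# Example: one clean row, one corrupted ratio, one BE-series shortfall
sample = pd.DataFrame({
    'symbol': ['GOOD', 'BAD_RATIO', 'BAD_BE'],
    'series': ['EQ', 'EQ', 'BE'],
    'total_volume': [1000, 1000, 1000],
    'delivery_volume': [400, 1500, 800],
    'delivery_percentage': [40.0, 150.0, 80.0],
})
flagged = flag_integrity_violations(sample)
print(flagged['symbol'].tolist())  # ['BAD_RATIO', 'BAD_BE']
```

Such checks are cheap to run at ingestion time and catch most corporate-action adjustment errors before they pollute downstream metrics.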
Data Architecture: Fetch → Store → Measure Workflow
To transform raw exchange reports into actionable structural statistics, a rigorous data pipeline is required. This workflow ensures that the latency between exchange release and database persistence is minimized, and that the data is stored in a format optimized for time-series analysis.
A. Data Fetch: The Ingestion Layer
The primary source for delivery data in India is the daily "Bhavcopy" and the "Security-wise Delivery Position" report released by the NSE and BSE. Unlike price data, which is available in real time via WebSocket feeds, delivery statistics are usually updated once per day, typically between 6:00 PM and 9:00 PM IST, after the clearing corporations have finalized the day's obligations.
Python Script for Fetching NSE Delivery Data
import requests
import pandas as pd
from io import StringIO
from typing import Optional


def fetch_nse_delivery_report(date_str: str) -> Optional[pd.DataFrame]:
    """
    Fetches the security-wise delivery report (Bhavcopy) from the NSE Archives.

    This function targets the 'Full' version of the Bhavcopy which includes
    delivery quantities and delivery percentages for all listed securities.

    Parameters:
    ----------
    date_str (str):
        The target trading date in 'DDMMYYYY' format.
        Example: '08012026' for 8th January 2026.

    Returns:
    -------
    Optional[pd.DataFrame]:
        A pandas DataFrame containing the trade data if successful.
        Returns None if the server returns a non-200 status code (e.g., 404 for holidays).

    Technical Details:
    -----------------
    - URL Endpoint: archives.nseindia.com/products/content/sec_bhavdata_full_{date_str}.csv
    - Headers: Uses a User-Agent to mimic a standard browser request, preventing
      immediate blocking by NSE's basic bot filters.
    """
    # Construct the dynamic URL based on the provided date
    base_url = "https://archives.nseindia.com/products/content/sec_bhavdata_full_{}.csv"
    target_url = base_url.format(date_str)

    # Headers are crucial for NSE. Without a User-Agent, the request is often rejected (403 Forbidden).
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
    }

    try:
        # Initiate HTTP GET request
        print(f"Requesting data from: {target_url}")
        response = requests.get(target_url, headers=headers, timeout=10)

        # HTTP 200 OK: The request has succeeded
        if response.status_code == 200:
            # The response content is a raw string of CSV data.
            # We wrap it in StringIO to create a file-like object compatible with pandas.
            csv_data = StringIO(response.text)

            # Parse CSV into a DataFrame
            df = pd.read_csv(csv_data)

            # Data Cleaning: NSE CSV headers often contain trailing whitespaces
            # (e.g., ' DATE1 ' instead of 'DATE1'). We strip them for consistency.
            df.columns = [col.strip() for col in df.columns]

            print("Data fetched and parsed successfully.")
            return df
        else:
            # Handle cases like Market Holidays (404) or Server Errors (5xx)
            print(f"Failed to fetch data. Status Code: {response.status_code}")
            return None
    except requests.exceptions.RequestException as e:
        print(f"Network error occurred: {e}")
        return None


# --- Execution Block ---
if __name__ == "__main__":
    # Example Usage
    # Note: Ensure the date corresponds to a valid trading day (excluding weekends/holidays).
    # Using '09012026' (January 9, 2026) as an example.
    test_date = "09012026"
    delivery_df = fetch_nse_delivery_report(test_date)

    if delivery_df is not None:
        # Display the first few rows to verify structure
        print("-" * 50)
        print(f"NSE Delivery Report for {test_date}")
        print("-" * 50)
        print(delivery_df.head())
        print("-" * 50)

        # Verify specific columns exist
        if 'DELIV_QTY' in delivery_df.columns:
            print("Verification: 'DELIV_QTY' column is present.")
    else:
        print("No data available for the specified date.")
Methodological Definition: Programmatic Data Acquisition
The code implements an automated pipeline to retrieve Bhavcopy data (market reports) directly from the exchange’s archival server. This process bypasses manual downloading by simulating a client-server interaction via HTTP.
Workflow Specification
- Step 1: Endpoint Construction
The algorithm dynamically generates the target Uniform Resource Locator (URL) by injecting the specific date parameter into the standard NSE archive schema.
Schema: .../sec_bhavdata_full_DDMMYYYY.csv
- Step 2: Header Masquerading
To successfully negotiate with the server, the request includes a custom User-Agent header. This ensures the programmatic request identifies itself as a standard web browser (e.g., Chrome or Mozilla), reducing the probability of a 403 Forbidden response.
- Step 3: In-Memory Stream Processing
Upon receiving a 200 OK status, the raw text payload is not written to the physical disk. Instead, it is wrapped in a StringIO buffer. This treats the text string as an in-memory file object, optimizing Input/Output (I/O) latency.
- Step 4: Tabular Parsing & Normalization
The system utilizes the Pandas library to parse the CSV stream into a DataFrame. A critical normalization step iterates through the column headers to strip extraneous whitespace (e.g., converting " SERIES " → "SERIES"), ensuring data addressability.
Mathematical Data Structure
The resulting output is a matrix where rows represent unique securities and columns represent market variables.
Workflow: Fetch → Store → Measure
The “Fetch” stage identifies the correct endpoint and handles the HTTP request/response cycle. The “Store” stage involves parsing the CSV/DAT files and inserting them into a structured environment like PostgreSQL. Finally, the “Measure” stage involves applying the mathematical formulas defined earlier to create derived metrics like the 20-day Moving Average of Delivery (MAD).
For short-term traders, this workflow highlights “Settlement Pressure”—high volume with low delivery suggests a purely speculative move that may reverse quickly. For long-term investors, the “Measure” stage identifies “Absorption”—consistent high delivery at specific price levels indicates institutional accumulation. TheUniBit specializes in automating these transitions, ensuring that the ‘Measure’ stage is not just a calculation, but a trigger for strategic decision-making.
B. Storage Design: The Persistence Layer
Because delivery statistics are released asynchronously compared to OHLC (Open, High, Low, Close) price data, the database architecture must support “late-binding” updates. A specialized software approach involves a decoupled schema where price and delivery data are treated as separate time-series streams that join on a composite key of Date and Symbol. This prevents the primary price data from being locked or delayed by the exchange’s delivery reporting latency.
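The late-binding idea can be illustrated with a left join on the composite key: price rows arrive first, and delivery figures bind hours later without ever blocking the price stream. A minimal pandas sketch (the column names and sample figures are illustrative, not exchange data):

```python
import pandas as pd

# Price stream: available at market close (OHLC only)
prices = pd.DataFrame({
    'trade_date': ['2026-01-09', '2026-01-09'],
    'symbol': ['RELIANCE', 'TCS'],
    'close': [2500.0, 4100.0],
})

# Delivery stream: published in the evening; TCS figures not yet released
delivery = pd.DataFrame({
    'trade_date': ['2026-01-09'],
    'symbol': ['RELIANCE'],
    'delivery_percentage': [50.0],
})

# Left join on the composite key (Date, Symbol): price rows are never
# blocked by delivery-reporting latency; unbound rows simply carry NaN
# until the evening file lands and the same join back-fills them.
merged = prices.merge(delivery, on=['trade_date', 'symbol'], how='left')
print(merged)
```

The same pattern maps directly onto a SQL `LEFT JOIN` between a price table and the delivery table defined below.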
Database Schema Design (SQLAlchemy ORM)
from datetime import date

from sqlalchemy import Column, Date, String, BigInteger, Numeric, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

# 1. Initialize the Base Class
# This acts as the catalog for all table definitions in the ORM (Object Relational Mapper).
Base = declarative_base()


class EquityDelivery(Base):
    """
    Represents the schema for the 'fact_equity_delivery' table.

    This table is designed to store daily delivery statistics for equities.
    It uses a Composite Primary Key (trade_date + symbol) to ensure that
    each stock has only one unique entry per trading day.
    """

    # The actual name of the table in the database
    __tablename__ = 'fact_equity_delivery'

    # --- Column Definitions ---

    # Trade Date: Part of the Composite Primary Key.
    # Stores the date of the market session.
    trade_date = Column(Date, primary_key=True, nullable=False)

    # Symbol: Part of the Composite Primary Key.
    # Stores the ticker symbol (e.g., 'RELIANCE', 'TCS').
    # Length 25 covers even the longest NSE indices/ETF symbols.
    symbol = Column(String(25), primary_key=True, nullable=False)

    # Series: EQ (Equity), BE (Trade-for-Trade), etc.
    # Important because delivery rules differ by series.
    series = Column(String(5), nullable=False)

    # Total Volume: Gross quantity traded.
    # BigInteger is required because volumes can exceed 2 billion (the 32-bit
    # integer limit) on high-activity days (e.g., penny stocks or Vodafone Idea).
    total_volume = Column(BigInteger, nullable=False)

    # Delivery Volume: Quantity marked for settlement.
    delivery_volume = Column(BigInteger, nullable=False)

    # Delivery Percentage: The calculated ratio (0-100).
    # Numeric(5, 2) allows for values up to 100.00.
    # We avoid Float/Double to prevent floating-point precision errors.
    delivery_percentage = Column(Numeric(5, 2), nullable=True)

    def __repr__(self):
        """String representation for debugging."""
        return f"<EquityDelivery(date='{self.trade_date}', symbol='{self.symbol}', del_pct={self.delivery_percentage})>"


# --- Execution Block ---
if __name__ == "__main__":
    try:
        print("Initializing Database Schema...")

        # NOTE: For immediate execution without a PostgreSQL server, we use SQLite in memory.
        # To use PostgreSQL, swap the string below with: 'postgresql://user:pass@localhost/market_data'
        db_connection_str = 'sqlite:///:memory:'

        # Create the engine (the core interface to the database)
        engine = create_engine(db_connection_str, echo=False)

        # Create Tables
        # This commands the engine to generate SQL 'CREATE TABLE' statements
        # for all classes inheriting from Base.
        Base.metadata.create_all(engine)
        print("Table 'fact_equity_delivery' created successfully.")

        # --- Test Data Insertion ---
        print("\nTesting Data Persistence...")

        # Create a session to interact with the database
        Session = sessionmaker(bind=engine)
        session = Session()

        # Create a dummy record (mimicking NSE data).
        # Note: the Date column expects a Python date object, not a string.
        sample_record = EquityDelivery(
            trade_date=date(2026, 1, 9),
            symbol='RELIANCE',
            series='EQ',
            total_volume=5000000,
            delivery_volume=2500000,
            delivery_percentage=50.00
        )

        # Add and Commit
        session.add(sample_record)
        session.commit()
        print(f"Record inserted: {sample_record}")

        # --- Verification Query ---
        fetched_record = session.query(EquityDelivery).filter_by(symbol='RELIANCE').first()
        print(f"Retrieved from DB: Symbol={fetched_record.symbol}, Delivery%={fetched_record.delivery_percentage}")
    except Exception as e:
        print(f"Database Error: {e}")
Methodological Definition: Persistence Layer Design
The code defines the Schema Architecture for storing high-volume financial data. It utilizes an Object-Relational Mapping (ORM) technique to translate Python class structures into SQL DDL (Data Definition Language) commands. This ensures that the analytical metrics calculated in previous steps are persisted reliably for historical time-series analysis.
Mathematical Specification: Relational Schema
The database table is defined as a relation consisting of a tuple of attributes. The schema enforces a Composite Primary Key constraint to guarantee uniqueness across the temporal dimension. The Primary Key is defined as:
PK = (trade_date, symbol)
Step-by-Step Logic
- Step 1: ORM Base Initialization
The system initializes a declarative Base. This acts as a metaclass registry, tracking all defined data models to ensure they are synchronized with the database engine.
- Step 2: Attribute Mapping (Typology)
The Python class attributes are mapped to specific SQL types to optimize storage:
- BigInteger: Used for the volume columns (total_volume, delivery_volume) to handle large-cap liquidity exceeding the 32-bit integer limit (~2.1 billion).
- Numeric(5, 2): Used for percentages to ensure fixed-point precision (e.g., 99.99), avoiding the floating-point errors common in standard scientific notation.
- Step 3: Constraint Definition
The schema explicitly sets primary_key=True on two columns: trade_date and symbol. This constraint prevents data duplication, ensuring that for any specific date, a specific stock ticker appears exactly once in the dataset.
The Logic of Normalization in Delivery Storage
Normalization is critical for maintaining structural accuracy over time. When a corporate action such as a 1:10 stock split occurs, the raw Total Volume and Delivery Volume will increase by a factor of 10. However, the Delivery Percentage remains a structurally stable ratio. By storing the ratio alongside the raw counts, Python-based analytical engines can perform historical backtesting without needing to manually adjust volume figures for every split or bonus issue, as the ratio is inherently self-adjusting.
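A quick numerical check makes the self-adjusting property concrete. The figures below are illustrative, reusing the ratio function defined earlier in this article:

```python
def delivery_pct(deliverable: int, total: int) -> float:
    """Delivery % = (Deliverable / Total) * 100, rounded to 2 places."""
    return round(deliverable / total * 100, 2)

# Pre-split session: 450k shares delivered out of 1.5M traded
pre_split = delivery_pct(450_000, 1_500_000)

# After a 1:10 split, both raw counts scale by a factor of 10...
post_split = delivery_pct(450_000 * 10, 1_500_000 * 10)

# ...but the structural ratio is invariant, so stored percentages
# require no historical back-adjustment.
print(pre_split, post_split)  # 30.0 30.0
```

The raw volume columns, by contrast, would need explicit adjustment factors for every split or bonus issue in a backtest.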
C. Measurement & Transformation: The Analytics Layer
The transformation layer is where raw delivery counts are converted into “Structural Trading Statistics.” The most vital transformation is the normalization of delivery against its own historical baseline. A single day of 60% delivery is meaningless unless compared to the stock’s structural norm. TheUniBit leverages Python’s pandas library to vectorize these calculations, allowing for the rapid processing of thousands of stocks simultaneously.
Python Code: Calculating Normalized Delivery Baselines
import pandas as pd


def calculate_structural_delivery_ratio(df: pd.DataFrame, window: int = 20) -> pd.DataFrame:
    """
    Measures the current delivery relative to its rolling historical average using Z-Score analysis.

    This function identifies 'Structural Breaks' in delivery patterns. A high positive Z-Score
    indicates delivery accumulation significantly above the normal baseline, potentially
    signaling institutional buying.

    Parameters:
    ----------
    df (pd.DataFrame):
        Input DataFrame containing a 'delivery_percentage' column.
        It is assumed the DataFrame is sorted by date (ascending).
    window (int):
        The lookback period for the structural baseline (default is 20 days/1 month).

    Returns:
    -------
    pd.DataFrame:
        The original DataFrame augmented with:
        - 'rolling_delivery_avg': The moving average of delivery %.
        - 'delivery_std': The rolling standard deviation.
        - 'delivery_z_score': The statistical deviation from the mean.
    """
    # Ensure we work on a copy to avoid SettingWithCopy warnings on the original object
    df = df.copy()

    # 1. Calculate the Rolling Mean (The Structural Baseline)
    # This smooths out daily noise to reveal the 'normal' delivery trend.
    df['rolling_delivery_avg'] = df['delivery_percentage'].rolling(window=window).mean()

    # 2. Calculate the Rolling Standard Deviation (Volatility of Delivery)
    # This measures how much the delivery % typically fluctuates.
    df['delivery_std'] = df['delivery_percentage'].rolling(window=window).std()

    # 3. Calculate the Z-Score (Standardized Anomaly Metric)
    # Formula: (Current Value - Mean) / Standard Deviation
    # A Z-score > 2.0 implies the current delivery is statistically significant (2 sigmas away).
    df['delivery_z_score'] = (
        (df['delivery_percentage'] - df['rolling_delivery_avg']) / df['delivery_std']
    )

    return df


# --- Execution Block ---
if __name__ == "__main__":
    # 1. Create Dummy Data mimicking a 30-day trading history
    # We simulate a "normal" market followed by a "breakout" in delivery.
    data = {
        'date': pd.date_range(start='2026-01-01', periods=30, freq='B'),  # 'B' = Business days
        'delivery_percentage': [30, 32, 29, 31, 33, 30, 28, 35, 30, 32] * 3  # Baseline ~31%
    }

    # Inject a structural break (massive delivery spike) in the last 2 days
    data['delivery_percentage'][-2] = 65  # Sudden jump to 65%
    data['delivery_percentage'][-1] = 70  # Follow-through at 70%

    df_market = pd.DataFrame(data)

    # 2. Apply the Algorithm
    # Using a 10-day window for this small dataset example
    processed_df = calculate_structural_delivery_ratio(df_market, window=10)

    # 3. Display Results (focusing on the anomaly)
    print("-" * 60)
    print("Structural Delivery Analysis (Last 5 Days)")
    print("-" * 60)

    # The first 'window' - 1 rows contain NaNs; tail(5) avoids them for cleaner viewing
    print(processed_df[['date', 'delivery_percentage', 'rolling_delivery_avg', 'delivery_z_score']].tail(5).to_string(index=False))
    print("-" * 60)

    # 4. Interpretation Logic
    last_z_score = processed_df['delivery_z_score'].iloc[-1]
    print(f"Latest Z-Score: {last_z_score:.2f}")

    if last_z_score > 2.0:
        print("SIGNAL: Statistically significant accumulation detected (> 2 Sigma).")
    elif last_z_score < -2.0:
        print("SIGNAL: Statistically significant distribution detected.")
    else:
        print("SIGNAL: Delivery activity is within normal historical bounds.")
Methodological Definition: Structural Break Detection
The code implements a statistical normalization technique known as Z-Score Analysis. Rather than viewing delivery percentages in isolation (absolute terms), this method evaluates the current session’s activity relative to its own recent history. This allows analysts to distinguish between routine fluctuations and significant “structural breaks” that often precede price trend reversals.
Mathematical Specification
The algorithm standardizes the delivery data using a rolling window of length n. The Z-Score at time t is calculated as:
Zt = (Xt − μt) / σt
Where:
- Xt is the Delivery Percentage at time t.
- μt is the Rolling Mean (Baseline) over the window: μt = (1/n) ∑ Xi, summed over the last n observations.
- σt is the Rolling Standard Deviation (Volatility) over the same window.
Step-by-Step Logic
- Step 1: Baseline Establishment (Moving Average)
The system calculates the simple moving average of the delivery percentage over the specified lookback period (e.g., 20 days). This line represents the "expected" behavior of the stock.
- Step 2: Volatility Measurement (Standard Deviation)
The system quantifies the noise or variability in the data. A stock with stable delivery (e.g., consistently 30%) has low deviation, while erratic stocks have high deviation.
- Step 3: Standardization (Z-Score Computation)
The current day's delivery is compared to the baseline.
- If Zt > 2: The current delivery is 2 standard deviations above average (a statistical anomaly), suggesting strong "hidden" accumulation.
- If Zt ≈ 0: The activity is perfectly average.
Structural Impact on Trading Horizons
The interpretation of Delivery Percentage as a structural statistic varies significantly across different timeframes. Unlike price indicators, which often generate “buy” or “sell” signals, delivery metrics indicate the quality and sustainability of the current market structure.
Short-Term (Intraday to T+1)
In the short term, Delivery Percentage serves as a proxy for “Settlement Pressure.” When a stock experiences a sharp price move accompanied by a significant drop in its delivery ratio (relative to its structural baseline), it indicates that the move is driven by intraday speculators. Such moves are often “hollow” and susceptible to mean reversion once the intraday positions are squared off.
- Settlement Pressure: High price volatility + Low delivery % = High probability of an intraday reversal.
- Float Availability: High delivery days effectively remove shares from the "active float" for the T+1 settlement cycle, potentially tightening liquidity for the subsequent sessions.
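The two conditions above can be sketched as a naive screen. The thresholds below (a price move beyond 3%, delivery under 0.75× its structural baseline) are arbitrary illustrations chosen for the example, not exchange rules:

```python
def settlement_pressure_flag(price_change_pct: float,
                             delivery_pct: float,
                             baseline_delivery_pct: float) -> bool:
    """Return True when a sharp price move is NOT backed by delivery.

    Assumed thresholds: |move| > 3% counts as sharp; delivery below
    0.75x its own structural baseline counts as 'hollow' volume.
    """
    sharp_move = abs(price_change_pct) > 3.0
    hollow_volume = delivery_pct < 0.75 * baseline_delivery_pct
    return sharp_move and hollow_volume

# A 5% rally on 15% delivery against a 32% structural norm:
# hollow volume behind a sharp move, so reversal risk is flagged.
print(settlement_pressure_flag(5.0, 15.0, 32.0))  # True
```

A production version would compute the baseline from the rolling average introduced in the analytics layer rather than passing it in by hand.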
Medium-Term (Weeks to Months)
Over a medium-term horizon, the delivery ratio acts as an “Absorption Metric.” By aggregating the total deliverable volume over a period, traders can estimate the amount of paper being “locked up” by institutional or long-term players. If a stock is consolidating within a range but shows a rising structural delivery average, it suggests that the floating supply is being absorbed by stronger hands.
Mathematical Definition of the Absorption Metric (AM)
AMn = ∑ Vdel,i, summed over i = 1 to n
Absorption Metric Variable Explanation
The Absorption Metric (AM) is a summation of deliverable quantities over a specific lookback period n. It represents the cumulative volume that has transitioned from the marketplace into demat accounts.
- AMn (Resultant): The total cumulative delivery over n periods.
- ∑ (Operator): The summation symbol representing the aggregate of daily deliverable volumes.
- Vdel,i (Summand): The deliverable quantity for each specific day i within the sequence.
- n (Limit): The total number of trading sessions in the lookback window.
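Following the definition above, the Absorption Metric reduces to a rolling sum of deliverable quantity. A minimal pandas sketch (the series values are illustrative):

```python
import pandas as pd

def absorption_metric(deliverable: pd.Series, n: int = 20) -> pd.Series:
    """AM_n: cumulative deliverable quantity over the last n sessions."""
    return deliverable.rolling(window=n).sum()

# Five sessions of deliverable quantity (illustrative figures)
daily_delivery = pd.Series([100, 200, 300, 400, 500])
am = absorption_metric(daily_delivery, n=3)

# Windows shorter than n are undefined (NaN); the first full window
# sums 100 + 200 + 300 = 600.
print(am.tolist())  # [nan, nan, 600.0, 900.0, 1200.0]
```

A rising AM during a sideways price range is the "absorption" signature described above: floating supply being locked up while price stands still.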
Long-Term (Strategic Holding)
For long-term strategic analysis, the market bifurcates stocks into “Locked Structures” and “Trading Vehicles.” Stocks with consistently high structural delivery (often > 60-70%) are typically investment-grade assets where the majority of turnover results in ownership change. Conversely, “Trading Vehicles” exhibit low structural delivery (often < 25%), indicating that the stock is primarily used for speculative churn and HFT activity, which results in higher “noise” and less reliable long-term price trends.
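The bifurcation described above reduces to a simple threshold rule. The 60% and 25% cut-offs below are the indicative bands from the text, not a formal market standard:

```python
def classify_structure(avg_delivery_pct: float) -> str:
    """Label a stock by its long-run structural delivery average."""
    if avg_delivery_pct > 60.0:
        return "Locked Structure"   # investment-grade, ownership-driven turnover
    if avg_delivery_pct < 25.0:
        return "Trading Vehicle"    # churn/HFT-dominated, noisier trends
    return "Mixed"

print(classify_structure(72.0))  # Locked Structure
print(classify_structure(18.0))  # Trading Vehicle
```

In practice the input would be a long-window average (e.g., a 1-year mean of daily delivery percentage) rather than a single session's reading.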
Comparative Structural Analysis
Delivery percentages are not uniform across the market; they are heavily influenced by market capitalization and index inclusion. A 30% delivery in a Small-Cap stock might be exceptionally high, while the same 30% in a Nifty 50 Blue-chip could be alarmingly low. This necessitates a cross-sectional approach to structural statistics.
Large-Cap vs. Small-Cap Structure
Large-cap stocks generally exhibit higher structural delivery floors due to the presence of Institutional Investors (FIIs/DIIs) who rarely trade for intraday gains. Their trades are almost exclusively delivery-based. Small-cap stocks, being more susceptible to retail speculation and operator-driven churn, often show highly erratic delivery patterns. Python scripts can be used to “Percentile Rank” a stock’s delivery within its own market-cap category to find true outliers.
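The percentile-rank idea can be sketched with a grouped rank in pandas. The `cap_category` column is an assumed label attached during preprocessing, not a field in the exchange files:

```python
import pandas as pd

df = pd.DataFrame({
    'symbol':       ['A', 'B', 'C', 'D', 'E', 'F'],
    'cap_category': ['Large', 'Large', 'Large', 'Small', 'Small', 'Small'],
    'delivery_percentage': [55.0, 60.0, 45.0, 20.0, 35.0, 25.0],
})

# Rank each stock's delivery within its own market-cap bucket;
# pct=True converts the rank into a (0, 1] percentile.
df['delivery_pctile'] = (
    df.groupby('cap_category')['delivery_percentage'].rank(pct=True)
)

# 35% delivery tops the Small-cap bucket even though the same reading
# would sit at the bottom of the Large-cap one.
print(df)
```

Ranking within the category rather than across the whole market is what makes a "high" small-cap delivery reading comparable to a "high" blue-chip reading.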
Exchange-Wise Splits (NSE vs. BSE)
A unique quirk of the Indian market is the dual-listing of stocks on both the NSE and BSE. While the NSE typically handles the lion’s share of volume, the BSE often sees different delivery characteristics. Analyzing the delivery percentage split between exchanges can reveal where the “real” investors are active versus where the “traders” are congregating.
Python Code: Cross-Exchange Delivery Comparison
import pandas as pd


def compare_exchange_structure(nse_df: pd.DataFrame, bse_df: pd.DataFrame) -> pd.DataFrame:
    """
    Computes the structural difference (Delta) between NSE and BSE delivery patterns.

    This analysis helps identify 'Liquidity Preference' and potential arbitrage opportunities.
    Significant differences in delivery ratios between exchanges for the same asset
    can indicate where the 'smart money' (institutional investors) prefers to settle trades.

    Parameters:
    ----------
    nse_df (pd.DataFrame):
        DataFrame containing NSE data. Must include columns: 'symbol', 'delivery_percentage'.
    bse_df (pd.DataFrame):
        DataFrame containing BSE data. Must include columns: 'symbol', 'delivery_percentage'.

    Returns:
    -------
    pd.DataFrame:
        A merged DataFrame containing only the common symbols and the 'delivery_diff' metric.
        - Positive diff (+): Higher accumulation quality on NSE.
        - Negative diff (-): Higher accumulation quality on BSE.
    """
    # 1. Merge Datasets (Inner Join)
    # We use an inner join to isolate only those securities traded on BOTH exchanges.
    # Suffixes are applied to distinguish identical column names (e.g., 'delivery_percentage').
    comparison = pd.merge(
        nse_df,
        bse_df,
        on='symbol',
        suffixes=('_nse', '_bse')
    )

    # 2. Compute the Structural Delta
    # Formula: NSE Delivery % - BSE Delivery %
    comparison['delivery_diff'] = comparison['delivery_percentage_nse'] - comparison['delivery_percentage_bse']

    # 3. Filter and Sort for Clarity
    # We return the relevant columns, sorted descending so the largest
    # positive divergences (NSE-heavy) appear first and the largest
    # negative ones (BSE-heavy) appear last.
    result_df = comparison[['symbol', 'delivery_percentage_nse', 'delivery_percentage_bse', 'delivery_diff']].copy()
    result_df = result_df.sort_values(by='delivery_diff', ascending=False)

    return result_df


# --- Execution Block ---
if __name__ == "__main__":
    # 1. Create Dummy Data mimicking concurrent trading on NSE and BSE
    # Scenario:
    # - RELIANCE: Institutional buying heavily on NSE (liquid), retail on BSE.
    # - TCS: Similar behavior on both.
    # - ITC: Higher delivery on BSE (perhaps due to a block deal or specific liquidity pocket).
    data_nse = {
        'symbol': ['RELIANCE', 'TCS', 'ITC', 'INFY'],
        'delivery_percentage': [65.0, 50.0, 40.0, 55.0]
    }
    data_bse = {
        'symbol': ['RELIANCE', 'TCS', 'ITC', 'INFY'],
        'delivery_percentage': [30.0, 49.5, 60.0, 56.0]
    }

    df_nse = pd.DataFrame(data_nse)
    df_bse = pd.DataFrame(data_bse)

    # 2. Execute Comparison
    structural_diff = compare_exchange_structure(df_nse, df_bse)

    # 3. Output Results
    print("-" * 75)
    print("Cross-Exchange Delivery Arbitrage Analysis")
    print("-" * 75)
    print(f"{'Symbol':<10} | {'NSE Del%':<10} | {'BSE Del%':<10} | {'Delta (NSE-BSE)':<15}")
    print("-" * 75)

    for index, row in structural_diff.iterrows():
        sym = row['symbol']
        nse = row['delivery_percentage_nse']
        bse = row['delivery_percentage_bse']
        diff = row['delivery_diff']
        print(f"{sym:<10} | {nse:<10} | {bse:<10} | {diff:<15.2f}")
    print("-" * 75)

    # 4. Automated Interpretation
    # Taking the top disparity
    top_pick = structural_diff.iloc[0]
    if top_pick['delivery_diff'] > 15:
        print(f"Insight: {top_pick['symbol']} shows massive delivery preference on NSE (+{top_pick['delivery_diff']}%)")
        print("Hypothesis: Institutional/FII flow likely concentrated on NSE.")

    bottom_pick = structural_diff.iloc[-1]
    if bottom_pick['delivery_diff'] < -15:
        print(f"Insight: {bottom_pick['symbol']} shows massive delivery preference on BSE ({bottom_pick['delivery_diff']}%)")
        print("Hypothesis: Possible block deal or promoter activity on BSE.")
Methodological Definition: Cross-Exchange Arbitrage Analysis
The code executes a comparative analysis between two distinct market venues (NSE and BSE). In an efficient market, the “quality” of trading (delivery ratio) for identical assets should theoretically converge. Divergence in these ratios—a structural delta—reveals liquidity preferences. For instance, institutional investors (FIIs/DIIs) typically prefer the exchange with higher depth (NSE), while specific block deals may occur on the BSE to minimize impact cost.
Mathematical Specification
The Structural Delta, denoted δ, is calculated for every security present in both subsets:

δ = D_NSE − D_BSE

Where:
- D_NSE is the Delivery Percentage on the National Stock Exchange.
- D_BSE is the Delivery Percentage on the Bombay Stock Exchange.
Step-by-Step Logic
- Step 1: Data Intersection (Inner Join)
The algorithm merges the two datasets using the “Symbol” as the primary key. An Inner Join is strictly applied to exclude securities that are not cross-listed (e.g., BSE-only SME stocks), ensuring an apples-to-apples comparison.
- Step 2: Delta Computation
The system subtracts the BSE delivery ratio from the NSE delivery ratio.
- High Positive Delta (+δ): Indicates the NSE is the “Investment Exchange” for this stock, while the BSE is being used primarily for speculative/intraday trading.
- High Negative Delta (-δ): Indicates significant accumulation on the BSE, often a signal of off-market transfers, promoter creeping acquisition, or bulk deals executed away from the primary liquidity center.
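The two steps above reduce to a simple signed comparison. As a minimal sketch (the symbol figures and the ±15-point cutoff are hypothetical illustrations, not exchange rules):

```python
# Minimal sketch of the Structural Delta classification described above.
# The delivery percentages and the cutoff value are hypothetical.

def classify_delta(d_nse: float, d_bse: float, cutoff: float = 15.0) -> str:
    """Classify a cross-listed stock by its signed delivery-percentage delta."""
    delta = d_nse - d_bse  # Step 2: NSE delivery % minus BSE delivery %
    if delta > cutoff:
        return "NSE-preferred settlement"   # high positive delta (+δ)
    if delta < -cutoff:
        return "BSE-preferred settlement"   # high negative delta (-δ)
    return "Converged"

print(classify_delta(65.0, 30.0))  # delta = +35.0
print(classify_delta(40.0, 60.0))  # delta = -20.0
print(classify_delta(50.0, 49.5))  # delta = +0.5
```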
Python Implementation: Building the Structural Engine
To move from theoretical understanding to execution, we must build a computational engine capable of identifying structural shifts in real-time. The “Delivery Anomaly Scrutinizer” is an algorithmic framework designed to detect when the delivery composition of a stock deviates significantly from its historical DNA, suggesting a change in the underlying participant mix.
Algorithm: The Delivery Anomaly Scrutinizer
This algorithm functions by establishing a “Structural Baseline” using a rolling window and then calculating the deviation of the current session’s delivery percentage. Unlike price-based oscillators, this engine looks for “Volume Quality” anomalies. If a stock’s price is rising while its delivery percentage is hitting a 52-week low, the engine flags a “Speculative Exhaustion” alert. Conversely, a price drop on a 52-week high delivery percentage flags “Institutional Absorption.”
Python Code: The Delivery Anomaly Scrutinizer Workflow
import pandas as pd
import numpy as np

def delivery_anomaly_scrutinizer(data: pd.DataFrame, threshold: float = 2.0) -> pd.DataFrame:
    """
    Identifies structural anomalies where delivery behavior diverges significantly
    from historical norms.

    This algorithm isolates two critical market regimes:
    1. Structural Absorption: High delivery volume during price declines
       (institutional accumulation).
    2. Speculative Churn: Low delivery volume during price rises (weak rally/trap).

    Parameters
    ----------
    data (pd.DataFrame):
        Input DataFrame containing historical market data.
        Must contain columns: 'close' (price) and 'delivery_pct' (delivery ratio).
        Assumed to be sorted by date in ascending order.
    threshold (float):
        The Z-score cutoff for defining an anomaly.
        Default is 2.0 (two standard deviations, approx. a 95% confidence interval).

    Returns
    -------
    pd.DataFrame:
        A subset of the original data containing the Z-scores and boolean anomaly flags.
    """
    # Work on a copy to prevent SettingWithCopy warnings on the original dataset
    df = data.copy()

    # ---------------------------------------------------------
    # 1. Establish the Structural Baseline (rolling statistics)
    # ---------------------------------------------------------
    # A 20-day lookback (approx. one trading month) defines what is "normal".
    df['mu_del'] = df['delivery_pct'].rolling(window=20).mean()    # expected delivery %
    df['sigma_del'] = df['delivery_pct'].rolling(window=20).std()  # volatility of delivery %

    # ---------------------------------------------------------
    # 2. Calculate the Structural Deviation (Z-score)
    # ---------------------------------------------------------
    # The Z-score normalizes the current day's delivery against the baseline.
    # Formula: (Current - Mean) / Standard Deviation
    df['del_zscore'] = (df['delivery_pct'] - df['mu_del']) / df['sigma_del']

    # ---------------------------------------------------------
    # 3. Define the Anomaly Logic (signal generation)
    # ---------------------------------------------------------
    # SCENARIO A: Structural Absorption
    # Logic: delivery is exceptionally HIGH (> threshold), but price is DOWN
    # versus yesterday's close (shift(1)).
    # Interpretation: strong hands are absorbing supply from panic sellers.
    df['absorption_flag'] = (
        (df['del_zscore'] > threshold) & (df['close'] < df['close'].shift(1))
    )

    # SCENARIO B: Speculative Churn
    # Logic: delivery is exceptionally LOW (< -threshold), but price is UP.
    # Interpretation: prices rising on hollow volume (intraday speculation)
    # without real ownership transfer. Often a "bull trap".
    df['churn_flag'] = (
        (df['del_zscore'] < -threshold) & (df['close'] > df['close'].shift(1))
    )

    # Return the diagnostic columns
    return df[['close', 'delivery_pct', 'del_zscore', 'absorption_flag', 'churn_flag']]


# --- Execution Block ---
if __name__ == "__main__":
    # 1. Create mock data: a 30-day trading history
    np.random.seed(42)  # fixed seed so the demonstration is reproducible
    dates = pd.date_range(start='2026-01-01', periods=30, freq='B')

    # Baseline of "normal" behavior: price drifts up, delivery hovers around 30%
    closes = np.linspace(100, 110, 30)
    del_pcts = np.random.normal(30, 2, 30)  # mean 30, std 2

    # Inject Anomaly 1: Structural Absorption (day 28)
    # Price drops, delivery spikes to 60%
    closes[28] = 105   # drop from the previous trend
    del_pcts[28] = 60  # massive accumulation

    # Inject Anomaly 2: Speculative Churn (day 29)
    # Price jumps, delivery collapses to 10%
    closes[29] = 112   # jump
    del_pcts[29] = 10  # hollow move

    # Assemble DataFrame
    historical_bhavcopy_df = pd.DataFrame({
        'date': dates,
        'close': closes,
        'delivery_pct': del_pcts
    })

    # 2. Run the scrutinizer
    results = delivery_anomaly_scrutinizer(historical_bhavcopy_df, threshold=2.0)

    # 3. Output analysis (last 5 days; earlier rows contain rolling-window NaNs)
    print("-" * 80)
    print("Market Regime Analysis: Delivery vs Price Action")
    print("-" * 80)
    display_data = results.tail(5)
    print(f"{'Date':<12} | {'Price':<8} | {'Del %':<8} | {'Z-Score':<8} | {'Regime Detected'}")
    print("-" * 80)
    for i, row in display_data.iterrows():
        # Map boolean flags to readable text
        regime = "Normal"
        if row['absorption_flag']:
            regime = ">> ABSORPTION (Buy)"
        elif row['churn_flag']:
            regime = ">> CHURN (Trap)"
        date_str = dates[i].strftime('%Y-%m-%d')
        print(f"{date_str:<12} | {row['close']:<8.2f} | {row['delivery_pct']:<8.1f} | "
              f"{row['del_zscore']:<8.2f} | {regime}")
    print("-" * 80)
Methodological Definition: Delivery-Price Divergence Analysis
The code implements a heuristic algorithm to detect market anomalies by cross-referencing Delivery Statistics with Price Action. While price indicates the “direction” of the trend, delivery percentage indicates the “conviction” or “permanence” of that move. The algorithm identifies two specific structural behaviors: Absorption (high-quality buying during drops) and Churn (low-quality buying during rises).
Mathematical Specification
The system relies on the Z-score (Z_t) of the delivery percentage:

Z_t = (DP_t − μ_t) / σ_t

We define the boolean anomaly flags A_t (Absorption) and C_t (Churn) as follows:
1. Structural Absorption (Hidden Accumulation): A_t = (Z_t > τ) ∧ (P_t < P_{t−1})
2. Speculative Churn (Bull Trap): C_t = (Z_t < −τ) ∧ (P_t > P_{t−1})
Where:
- τ is the Threshold (e.g., 2.0).
- P_t is the Closing Price at time t.
- μ_t and σ_t are the 20-day rolling mean and standard deviation of DP_t.
- ∧ denotes the Logical AND operator.
Step-by-Step Logic
- Step 1: Baseline Establishment
The system computes the 20-day Rolling Mean (μ_t) and Standard Deviation (σ_t) for the delivery percentage. This creates a dynamic “norm” that adapts to changing market conditions.
- Step 2: Z-Score Normalization
The current day’s delivery is converted into a standard score. This standardizes the data, allowing the algorithm to detect statistically significant outliers regardless of the stock’s absolute liquidity.
- Step 3: Signal Logic Application
The algorithm applies conditional logic to classify the day’s activity:
- If delivery is statistically high but price is falling, it flags Absorption. This implies “Smart Money” is entering the stock and preventing a further collapse, often marking a support level.
- If delivery is statistically low but price is rising, it flags Churn. This implies the price rise is driven by intraday speculators who are not taking positions home, suggesting the rally is fragile (hollow).
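As a quick arithmetic check of the Z-score logic, using the baseline assumed in the mock data above (a roughly 30% mean with a 2-point standard deviation):

```python
# Worked Z-score check using the baseline assumed in the mock data above:
# rolling mean ~ 30%, rolling standard deviation ~ 2 percentage points.
mu, sigma, tau = 30.0, 2.0, 2.0

z_spike = (60.0 - mu) / sigma   # delivery spikes to 60%  -> z = 15.0
z_crash = (10.0 - mu) / sigma   # delivery collapses to 10% -> z = -10.0

# Both magnitudes far exceed the ±2.0 threshold, so only the price direction
# decides the regime: Absorption (price down) vs Churn (price up).
print(z_spike, z_crash)  # 15.0 -10.0
```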
The Structural Convergence Metric (SCM)
To quantify the alignment between price movement and delivery quality, we introduce the Structural Convergence Metric. This metric determines if the “Permanent” volume is supporting the current price trend. An SCM value approaching 1 indicates a high-conviction trend, while a value near 0 indicates a trend built entirely on intraday noise.
Mathematical Definition of Structural Convergence Metric (SCM)
SCM Formula and Variable Explanation
The SCM is essentially a Min-Max normalization of the delivery percentage over a lookback window n. It scales the current delivery ratio into a bounded oscillator between 0 and 1:

SCM_t = (DP_t − min(DP[t−n, t])) / (max(DP[t−n, t]) − min(DP[t−n, t]))
- SCMt (Resultant): The Structural Convergence Metric at time t.
- DPt (Variable): The current day’s Delivery Percentage.
- min(DP[t-n, t]) (Denominator/Numerator Term): The minimum delivery percentage recorded over the last n sessions.
- max(DP[t-n, t]) (Denominator Term): The maximum delivery percentage recorded over the last n sessions.
- n (Parameter): The lookback period (typically 20 or 50 days for structural stability).
- – (Operator): Subtraction used to find the range and the distance from the minimum.
- / (Operator): Division used to normalize the value.
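The definition above translates directly into a rolling Min-Max normalization in pandas. A minimal sketch (the function name and the sample series are illustrative, consistent with the column names used earlier):

```python
import pandas as pd

def structural_convergence_metric(delivery_pct: pd.Series, n: int = 20) -> pd.Series:
    """Min-Max normalize the delivery percentage over a rolling n-day window.

    SCM_t = (DP_t - min(DP[t-n, t])) / (max(DP[t-n, t]) - min(DP[t-n, t]))
    Returns values bounded in [0, 1]; NaN where the window is incomplete.
    """
    lo = delivery_pct.rolling(window=n).min()   # min(DP[t-n, t])
    hi = delivery_pct.rolling(window=n).max()   # max(DP[t-n, t])
    return (delivery_pct - lo) / (hi - lo)

# Hypothetical 5-day illustration with a short 3-day window:
dp = pd.Series([30.0, 40.0, 50.0, 35.0, 60.0])
print(structural_convergence_metric(dp, n=3))
# Day 3 (50.0) is the window max -> SCM = 1.0; day 4 (35.0) is the window min -> SCM = 0.0
```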
Mandatory Technical Compilations: Structural Data Sources
Building an authoritative trading system requires access to pristine data sources. In the Indian market, delivery data is categorized as “Non-Display Data” when accessed via APIs, but it is available for public research through official exchange portals. A Python-centric workflow must prioritize these official channels to ensure the “Fetch” layer of the pipeline is accurate.
Official Sources and Python-Friendly APIs
To maintain high-quality structural statistics, data should be sourced directly from the primary infrastructure providers of the Indian Equity Market. TheUniBit recommends the following sources for building institutional-grade delivery pipelines:
- NSE India (Archives): The “Security-wise Delivery Position” report is the gold standard. It is released daily in .dat and .csv formats.
- BSE India (Historical): Provides a dedicated “Delivery Position” search tool that can be automated using Python’s selenium or playwright for web scraping.
- NiftyIndices.com: Useful for fetching the delivery statistics of specific indices to compare a stock’s structural health against its benchmark.
Significant News Triggers and Delivery Behavior
While delivery is a structural statistic, it reacts sharply to specific market events. A Python engine should be programmed to monitor these “News Triggers” and analyze the subsequent delivery response:
- MSCI/FTSE Rebalancing: These events trigger massive “Permanent” volume as passive funds must settle their trades. A failure to see high delivery on rebalancing days suggests a liquidity mismatch.
- Block Deal Windows: Block deals are 100% delivery-based. When they occur, the Total Volume will spike, but the Delivery Percentage will also surge, preventing the “Dilution” of the structural signal.
- Earnings Releases: Post-earnings volatility with low delivery suggests retail “guessing,” whereas a post-earnings price break with high delivery suggests institutional “conviction.”
Python Logic: Detecting Block Deal Distortions
import pandas as pd

def filter_block_deal_noise(df: pd.DataFrame, volume_threshold: float = 5.0) -> pd.DataFrame:
    """
    Identifies 'Block Deal' events where volume spikes significantly, accompanied by
    extremely high delivery percentages.

    Block deals are pre-arranged trades between two parties (usually institutional).
    While they represent massive volume, they are often 'noise' for retail trend
    analysis because the stock simply moves from one long-term pocket to another
    without affecting the open-market supply/demand significantly in the short term.

    Parameters
    ----------
    df (pd.DataFrame):
        Input DataFrame containing historical market data.
        Must include 'total_volume' and 'delivery_pct'.
        Assumed to be sorted by date in ascending order.
    volume_threshold (float):
        The multiplier for the volume spike.
        Default is 5.0 (volume is 5x the 20-day average).

    Returns
    -------
    pd.DataFrame:
        The DataFrame augmented with 'vol_spike' and 'is_block_deal' flags.
    """
    # Work on a copy to preserve the original data
    df = df.copy()

    # 1. Calculate the Volume Baseline (20-day rolling mean).
    # shift(1) excludes the current day, so a spike cannot inflate its own baseline.
    rolling_vol_mean = df['total_volume'].rolling(window=20).mean().shift(1)

    # 2. Identify Volume Spikes
    # A spike is flagged if current volume > (Baseline * Threshold).
    # e.g., if avg volume = 1M and threshold = 5, we look for days with > 5M volume.
    df['vol_spike'] = df['total_volume'] > (rolling_vol_mean * volume_threshold)

    # 3. Identify Block Deal candidates
    # A true block deal usually involves a massive transfer of ownership, so we look for:
    # (a) a massive volume spike, AND
    # (b) very high delivery % (> 80%), indicating the shares were not squared off.
    df['is_block_deal'] = df['vol_spike'] & (df['delivery_pct'] > 80.0)
    return df


# --- Execution Block ---
if __name__ == "__main__":
    # 1. Create dummy data
    # Scenario:
    # - Normal trading for most days.
    # - Day 25: a massive Block Deal (high volume, high delivery).
    # - Day 28: a massive speculative spike (high volume, low delivery) -- NOT a block deal.
    dates = pd.date_range(start='2026-01-01', periods=30, freq='B')
    volumes = [100000] * 30   # baseline volume: 100k
    del_pcts = [40.0] * 30    # baseline delivery: 40%

    # Inject Block Deal (day 25)
    volumes[25] = 600000      # 6x volume spike
    del_pcts[25] = 95.0       # 95% delivery (hand exchange)

    # Inject speculative spike (day 28)
    volumes[28] = 700000      # 7x volume spike
    del_pcts[28] = 15.0       # low delivery (intraday war)

    df_market = pd.DataFrame({
        'date': dates,
        'total_volume': volumes,
        'delivery_pct': del_pcts
    })

    # 2. Run filter
    processed_df = filter_block_deal_noise(df_market, volume_threshold=5.0)

    # 3. Output results (last 7 days, covering both injected anomalies)
    print("-" * 75)
    print("Block Deal Identification Log")
    print("-" * 75)
    print(f"{'Date':<12} | {'Volume':<10} | {'Del %':<8} | {'Spike?':<8} | {'Block Deal?'}")
    print("-" * 75)
    display_df = processed_df.tail(7)
    for i, row in display_df.iterrows():
        date_str = row['date'].strftime('%Y-%m-%d')
        vol_str = f"{row['total_volume']:,}"
        spike_str = "YES" if row['vol_spike'] else "No"
        block_str = ">> YES <<" if row['is_block_deal'] else "No"
        print(f"{date_str:<12} | {vol_str:<10} | {row['delivery_pct']:<8.1f} | "
              f"{spike_str:<8} | {block_str}")
    print("-" * 75)
Methodological Definition: Block Deal Filtration
The code provides a mechanism to distinguish between “organic” market volatility and pre-arranged “Block Deals.” Block deals often appear as massive statistical outliers in volume charts but usually represent a simple hand-change between institutional entities (e.g., Promoter to Mutual Fund) rather than active open-market discovery. Identifying and filtering these is crucial to prevent false positives in volatility trading algorithms.
Mathematical Specification
A trading session is classified as a Block Deal Event (B_t) if it satisfies two simultaneous conditions:

B_t = (V_t > k × μ_vol) ∧ (D% > 80)

Where:
- V_t is the Total Traded Volume at time t.
- μ_vol is the 20-day Rolling Mean of Volume (Baseline).
- k is the Volume Multiplier Threshold (Default: 5.0x).
- D% is the Delivery Percentage.
Step-by-Step Logic
- Step 1: Baseline Volume Establishment
The system calculates the average volume over the last 20 trading sessions. This establishes a “Quiet Period” baseline against which shocks are measured.
- Step 2: Spike Detection
The algorithm flags any day where the volume exceeds the baseline by a factor of k (e.g., 500% of normal volume).
- Step 3: Delivery Confirmation
A high volume spike alone is ambiguous: it could be a panic sell-off or a speculative frenzy. However, if the Delivery Percentage is exceptionally high (e.g., >80%), it confirms that the shares were not traded back and forth (churned) but were moved into a demat account. This specific signature (Massive Volume + Massive Delivery) is the hallmark of a Block Deal.
Mandatory Technical Compilations: Structural Frameworks & Algorithms
In this final section, we consolidate the technical infrastructure required to sustain a high-quality delivery analytics platform. For a software development firm like TheUniBit, the focus is on maintaining data integrity while scaling the “Measure” workflow across thousands of instruments in the Indian Equity space.
I. Compiled Python Libraries for Delivery Analytics
The following libraries form the core stack for any institutional-grade delivery analysis tool. These tools facilitate everything from raw data ingestion to complex vector transformations.
- nsepython:
  - Features: Specialized wrapper for NSE India; handles cookies and headers automatically.
  - Key Functions: nse_fetch_delivery(symbol), bhavcopy_full().
  - Use Cases: Real-time retrieval of delivery statistics for Nifty 50 and Midcap stocks.
- jugaad-data:
  - Features: Efficient historical data downloader for NSE/BSE.
  - Key Functions: stock_csv(symbol, from_date, to_date).
  - Use Cases: Building local archives of delivery data from 2005 to the present.
- Pandas:
  - Features: High-performance data structures and time-series tools.
  - Key Functions: df.rolling(), df.pct_change().
  - Use Cases: Vectorized calculation of Delivery Moving Averages and Structural Convergence Metrics.
- SQLAlchemy:
  - Features: Python SQL toolkit and Object Relational Mapper (ORM).
  - Key Functions: session.bulk_insert_mappings().
  - Use Cases: Persisting large-scale daily delivery reports into PostgreSQL.
- BeautifulSoup & Selenium:
  - Features: Web scraping and browser automation.
  - Key Functions: soup.find_all('table'), driver.get(url).
  - Use Cases: Automating the extraction of BSE delivery reports, which lack a public direct-download CSV endpoint.
II. Normalized Delivery Value (NDV)
While the Delivery Percentage is a relative ratio, the Normalized Delivery Value (NDV) provides the absolute Rupee-denominated intensity of the settlement. This is critical for identifying “Value Absorption” zones in large-cap stocks.
Mathematical Specification of Normalized Delivery Value (NDV)

NDV_t = V_del,t × VWAP_t
Python Implementation of NDV
import locale

def calculate_normalized_delivery_value(deliverable_qty: int, vwap: float) -> float:
    """
    Calculates the 'Normalized Delivery Value' (NDV) for a given security.

    This metric represents the actual Rupee value of shares that were physically
    transferred (settled) between accounts. Unlike 'Total Turnover', which includes
    intraday speculation, NDV represents the real capital commitment (Investment
    Depth) for the day.

    Parameters
    ----------
    deliverable_qty (int):
        The total quantity of shares marked for delivery (settlement).
        Must be a non-negative integer.
    vwap (float):
        Volume Weighted Average Price. Using VWAP is methodologically superior
        to 'Close Price' for this calculation because delivery trades occur
        throughout the session at various price points, not just at the close.

    Returns
    -------
    float:
        The total monetary value of the delivered shares (in local currency units).
    """
    # Validation: ensure the logic holds for real-world market data
    if deliverable_qty < 0 or vwap < 0:
        raise ValueError("Quantity and Price must be non-negative.")

    # Core calculation: Quantity * Weighted Price.
    # This derives the "Real Money Flow" into/out of the stock for the day.
    ndv = deliverable_qty * vwap
    return ndv


# --- Execution Block ---
if __name__ == "__main__":
    # Example scenario: high-value transactions in a blue-chip stock
    qty_delivered = 500000    # 5 Lakh shares
    avg_price = 2450.75       # VWAP in Rupees

    try:
        # Calculate the metric
        total_value_delivered = calculate_normalized_delivery_value(qty_delivered, avg_price)

        # Currency formatting: attempt the Indian English locale (e.g., 10,00,000);
        # if unavailable, fall back to the system default.
        try:
            locale.setlocale(locale.LC_ALL, 'en_IN')
        except locale.Error:
            locale.setlocale(locale.LC_ALL, '')
        formatted_value = locale.format_string("%.2f", total_value_delivered, grouping=True)

        # Output results
        print("-" * 50)
        print("Capital Commitment Analysis (NDV)")
        print("-" * 50)
        print(f"Deliverable Quantity : {qty_delivered:,}")
        print(f"VWAP (Session Avg)   : {avg_price:,.2f}")
        print("-" * 50)
        print(f"Total Delivery Value : INR {formatted_value}")
        print("-" * 50)

        # Interpretation: convert to Crores (1 Crore = 10 Million) for easier reading
        value_in_cr = total_value_delivered / 10000000
        print(f"Economic Interpretation: \u20B9{value_in_cr:.2f} Crores of real capital changed hands.")
    except Exception as e:
        print(f"Calculation Error: {e}")
Methodological Definition: Normalized Delivery Value (NDV)
The code calculates the Investment Depth of a trading session. While standard “Turnover” aggregates both speculative (intraday) and investment (delivery) capital, the Normalized Delivery Value isolates the specific monetary worth of shares that actually underwent settlement. This is a superior metric for gauging the “conviction” of buyers, as it represents hard cash locked into positions rather than temporary leverage.
Mathematical Specification
The algorithm derives the monetary value by applying the Volume Weighted Average Price (VWAP) to the settled quantity:

NDV = Q_del × P_vwap
Where:
- Qdel is the Deliverable Quantity (Shares moving to Demat).
- Pvwap is the Volume Weighted Average Price.
Step-by-Step Logic
- Step 1: Metric Selection (Price Proxy)
The function requires the VWAP rather than the Closing Price.
Rationale: delivery trades are executed continuously from market open to close. Using the Closing Price (which represents only the last 30 minutes of weighted trading) would introduce a “valuation bias,” especially on volatile days. VWAP provides the true average cost of acquisition for the entire session.
- Step 2: Capital Computation
The system performs a scalar multiplication of the confirmed deliverable units by the average price per unit. This transforms a “Volume” metric (count of shares) into a “Value” metric (currency amount).
- Step 3: Economic Interpretation
The resulting figure represents the Real Capital Flow. For example, a stock might have high volume but low delivery value (indicating high churn/speculation), whereas a high Delivery Value indicates strong institutional accumulation or distribution.
Formula Components and Variable Explanation
- NDVt (Resultant): The total monetary value of delivery settlement on day t.
- Vdel,t (Variable): The quantity of deliverable shares reported by the exchange.
- VWAPt (Variable): Volume Weighted Average Price. Using VWAP instead of “Close” provides a more accurate representation of the average entry price for those taking delivery.
- × (Operator): Multiplication used to derive the monetary value from quantity.
III. Multi-Session Delivery Aggregator (MSDA)
For stocks listed on both the NSE and BSE, looking at one exchange in isolation provides an incomplete structural picture. The MSDA algorithm combines data from both venues to produce a “True Market Delivery” statistic.
Python Algorithm: Multi-Session Delivery Aggregator
def aggregate_exchange_delivery(nse_data: dict, bse_data: dict) -> dict:
    """
    Aggregates delivery statistics across NSE (National Stock Exchange) and
    BSE (Bombay Stock Exchange) to generate a unified view of market liquidity.

    In fragmented markets, looking at a single exchange often provides an incomplete
    picture of supply and demand. This function consolidates the data to calculate
    the 'True Market' delivery percentage.

    Parameters
    ----------
    nse_data (dict):
        Dictionary containing NSE stats. Must have keys: 'total_vol' and 'del_vol'.
    bse_data (dict):
        Dictionary containing BSE stats. Must have keys: 'total_vol' and 'del_vol'.

    Returns
    -------
    dict:
        A dictionary containing:
        - 'total_market_volume': combined gross volume.
        - 'total_market_delivery': combined settled volume.
        - 'true_del_percentage': the weighted delivery ratio across venues.
    """
    # 1. Aggregate gross volumes (the denominator):
    # summing the turnover from both liquidity centers.
    unified_total_vol = nse_data['total_vol'] + bse_data['total_vol']

    # 2. Aggregate deliverable volumes (the numerator):
    # summing the actual shares marked for settlement.
    unified_del_vol = nse_data['del_vol'] + bse_data['del_vol']

    # 3. Calculate the True Market Delivery Percentage.
    # Guard clause: prevent division by zero if both exchanges report zero volume
    # (e.g., holidays or a trading halt).
    if unified_total_vol == 0:
        true_market_del_pct = 0.0
    else:
        # Formula: (Total Delivery / Total Volume) * 100
        true_market_del_pct = (unified_del_vol / unified_total_vol) * 100

    return {
        'total_market_volume': unified_total_vol,
        'total_market_delivery': unified_del_vol,
        'true_del_percentage': round(true_market_del_pct, 2)
    }


# --- Execution Block ---
if __name__ == "__main__":
    # Example scenario: a stock primarily traded on NSE but with significant
    # block activity on BSE.
    # NSE data: higher volume, lower delivery ratio (typical for liquid stocks)
    nse_stats = {
        'total_vol': 10000000,  # 1 Crore shares
        'del_vol': 3000000      # 30% delivery
    }
    # BSE data: lower volume, higher delivery ratio (often used for delivery-based buying)
    bse_stats = {
        'total_vol': 500000,    # 5 Lakh shares
        'del_vol': 400000       # 80% delivery
    }

    try:
        # Run aggregation
        market_view = aggregate_exchange_delivery(nse_stats, bse_stats)

        # Output results
        print("-" * 50)
        print("Unified Market Liquidity View")
        print("-" * 50)
        print(f"NSE Volume          : {nse_stats['total_vol']:,}")
        print(f"BSE Volume          : {bse_stats['total_vol']:,}")
        print("-" * 50)
        print(f"Total Market Volume : {market_view['total_market_volume']:,}")
        print(f"Total Settled Qty   : {market_view['total_market_delivery']:,}")
        print("-" * 50)
        print(f"True Market Delivery: {market_view['true_del_percentage']}%")
        print("-" * 50)

        # Analytic insight
        nse_pct = (nse_stats['del_vol'] / nse_stats['total_vol']) * 100
        print(f"Comparison: NSE Standalone ({nse_pct:.2f}%) vs Unified "
              f"({market_view['true_del_percentage']}%)")
        print("Note how the high-quality delivery on BSE slightly lifts the overall metric.")
    except KeyError as e:
        print(f"Data Error: Missing required key {e} in input dictionaries.")
Methodological Definition: Fragmented Liquidity Aggregation
The code addresses the issue of market fragmentation. In modern financial ecosystems, a single security often trades simultaneously on multiple venues (e.g., NSE and BSE in India). Analyzing a single exchange in isolation can lead to skewed conclusions, particularly when institutional “Block Deals” are routed to the secondary exchange (BSE) to minimize price impact. This algorithm consolidates the order books to derive a “True Market” delivery metric.
Mathematical Specification
The Unified Delivery Percentage (D%_unified) is calculated by summing the respective volume components before computing the ratio, rather than averaging the individual percentages:

D%_unified = (V_del,NSE + V_del,BSE) / (V_total,NSE + V_total,BSE) × 100

Where:
- V_del represents the Deliverable Quantity (Settled Volume).
- V_total represents the Total Traded Quantity (Gross Volume).
Step-by-Step Logic
- Step 1: Component Summation
The algorithm independently sums the numerators (delivery volume) and the denominators (total volume) from both exchanges. This is critical because a simple average of the two delivery percentages would be mathematically incorrect unless volumes were identical on both exchanges (a form of Simpson’s Paradox).
- Step 2: Unified Ratio Calculation
The total aggregated delivery volume is divided by the total aggregated market volume. This provides a weighted view, where the exchange with the higher volume naturally dominates the final percentage.
- Step 3: Data Object Construction
The function encapsulates the raw aggregated totals along with the calculated percentage into a return object. This ensures that downstream analytics have access to both the relative ratio and the absolute liquidity figures.
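The pooling caveat in Step 1 is easy to demonstrate with the example figures used earlier (the volumes are the same hypothetical values from the execution block):

```python
# Why delivery percentages must be pooled, not averaged.
# Figures mirror the hypothetical NSE/BSE example used above.
nse = {'total_vol': 10_000_000, 'del_vol': 3_000_000}   # 30% standalone
bse = {'total_vol': 500_000,    'del_vol': 400_000}     # 80% standalone

naive_avg = (30.0 + 80.0) / 2  # 55.0 -- badly misleading

pooled = (nse['del_vol'] + bse['del_vol']) / (nse['total_vol'] + bse['total_vol']) * 100

# NSE's 20x volume dominance keeps the true unified figure near 30%,
# far from the naive 55% average.
print(round(pooled, 2))  # 32.38
```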
IV. Curated Data Sources & Database Architecture
To implement the “Store” phase of the workflow, we define a robust data storage environment. The following structure is optimized for high-speed structural queries.
Database Structure (PostgreSQL Schema)
- Database Engine: PostgreSQL 14+ with TimescaleDB extension for time-series optimization.
- Table: structural_metrics
  - timestamp: TIMESTAMPTZ (Primary Key)
  - symbol: VARCHAR(20) (Primary Key/Index)
  - exchange: ENUM(‘NSE’, ‘BSE’, ‘TOTAL’)
  - delivery_pct: DECIMAL(6,2)
  - del_zscore: DECIMAL(6,4)
  - ndv_cr: DECIMAL(15,2) (Value in Crores)
Official Data Sources & News Triggers
| Data Category | Source / API | News Trigger Impact |
|---|---|---|
| Cash Market Delivery | NSE India Archives (Daily) | Quarterly Results: High volatility + High Delivery = Conviction |
| Index Delivery Stats | Nifty Indices (Daily Reports) | Index Rebalancing: Massive “Permanent” volume spikes |
| Bulk/Block Deals | Exchange Disclosure API | Ownership Changes: 100% Delivery Trades |
Summary of Structural Impact
The “Delivery Percentage as a Structural Trading Statistic” moves the analyst away from the superficial “Volume” metric and into the realm of “Settlement Reality.” By utilizing the Python frameworks and mathematical definitions provided, traders can quantify the durability of market trends. For a high-performance software solution tailored to these Indian market nuances, partnering with an expert like TheUniBit ensures that your data pipelines are not only fast but structurally sound, enabling a true competitive edge in the equity markets.

