A New Taxonomy for AI
The Approach-Based AI Framework (ABAF)
A Functional Classification Beyond Machine Learning and Deep Learning
Author: Dr. Sharad Maheshwari, imagingsimplified@gmail.com
Abstract
The conventional distinction between machine learning (ML) and deep learning (DL) has become increasingly inadequate to describe the diversity of modern artificial intelligence (AI) systems. As hybrid architectures, multimodal models, and edge deployments blur algorithmic boundaries, practitioners—especially in applied domains such as healthcare—require a more functional understanding of AI.
This paper proposes the Approach-Based AI Framework (ABAF), a new taxonomy that classifies AI systems by approach and operational requirements rather than architecture. ABAF introduces five primary categories—Rule-Guided, Representation-Driven, Hybrid Reasoning, Resource-Adaptive, and Autonomous Learning systems—supported by secondary modifiers for data dependency, learning mode, knowledge source, and deployment environment. The framework aims to unify education, deployment, and governance perspectives by explaining AI through its purpose, constraints, and learning mechanism rather than algorithmic depth.
The Problem: Why "ML vs. DL" Fails
To understand the need for ABAF, this section explains the limitations of the traditional "Machine Learning vs. Deep Learning" model. This binary view is no longer sufficient for today's complex AI landscape, especially for non-engineers who need to make critical decisions about AI adoption.
Eroded Boundaries
Modern architectures mix hand-crafted features (classic ML) with deep backbones (DL). The line is too blurry to be a useful distinction.
Wrong Focus
Algorithmic "depth" does not predict interpretability, autonomy, or deployment cost, which are the critical factors in practice.
Misleading Hierarchies
The terminology misleads non-engineers into thinking "deep" is always "better," when suitability depends entirely on the data, context, and goal.
ABAF's Conceptual Foundation
1. Functionalism
AI should be understood by *how it functions*—its inputs, dependencies, and logic—not its internal architecture.
2. Contextual Adaptability
The same algorithm behaves differently based on resource, data, and environmental constraints. Context is key.
3. Educational Clarity
Classification must focus on requirements and interpretability for interdisciplinary users like clinicians and regulators.
Explore the Framework
The heart of ABAF is its classification based on *approach* and *requirement*. This section walks through the five core categories and their key modifiers.
The Five Core Categories
📚 1. Rule-Guided Systems
These systems operate using explicit rules, logic, and "if-then" statements defined by human experts. Their "intelligence" comes from a pre-programmed knowledge base, not from learning patterns from data.
What is the base algorithm (ML or DL)?
This category generally does not use modern ML or DL. Its "algorithm" is the set of rules itself. This is the classic, original form of AI, often called an "Expert System."
Key algorithm families:
- Decision Trees: A series of "if-then-else" questions that guide the user to a specific answer.
- Clinical Scoring Models: Simple, weighted calculators where the "rules" are mathematical weights defined by medical literature.
- Client-Side Logic: The rules are often built directly into the application's code (e.g., using JavaScript), allowing them to run entirely offline.
Defining Question
Is its logic based on pre-defined, explicit human knowledge?
Key Requirement
Human expertise to create and validate the rules.
Typical Applications
Clinical decision support calculators, guideline enforcement, protocol planners.
Real-World Healthcare Examples
1. Contrast Planner (Author's Example)
A web-based tool (contrastplanner.blogspot.com) that helps clinicians plan safe CT contrast media injections.
- How it Works: Directly encodes published guidelines (the "rules") from medical societies.
- Logic: A built-in JavaScript decision tree applies expert rules to clinician inputs (e.g., eGFR, scan type).
- Output: Provides a specific, actionable protocol with no internet connection or API calls.
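To make this concrete, here is a minimal Python sketch of the kind of client-side decision tree such a tool encodes. The thresholds, inputs, and messages are illustrative placeholders, not the actual tool's rules, and certainly not clinical guidance:

```python
# Illustrative sketch of a rule-guided decision tree, in the spirit of a
# client-side contrast planner. All thresholds and outputs are hypothetical
# placeholders (not the real tool's rules, not clinical guidance).

def plan_contrast(egfr: float, scan_type: str) -> str:
    """Apply explicit, human-authored rules to clinician inputs."""
    if scan_type == "non-contrast":
        return "No contrast required."
    if egfr < 30:    # placeholder threshold
        return "Contrast contraindicated; consider a non-contrast protocol."
    if egfr < 45:    # placeholder threshold
        return "Reduced contrast dose with a hydration protocol."
    return "Standard contrast protocol."

print(plan_contrast(egfr=38.0, scan_type="CT abdomen"))
```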
2. Clinical Risk Calculators (e.g., CHA2DS2-VASc)
Simple medical calculators (like MDCalc) used to assess risk for conditions like stroke.
- How it Works: The "AI" is just a simple, published scoring system.
- Rule: The entire logic is: "C (CHF) = 1 point, H (Hypertension) = 1 point, A2 (Age ≥ 75) = 2 points..."
- Output: Adds the points and presents the final score and corresponding risk from a pre-defined table.
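Because the entire model is a published weight table, it fits in a few lines of code. Below is a sketch following the published CHA2DS2-VASc weights (the function and variable names are ours):

```python
# CHA2DS2-VASc stroke-risk score: the whole "model" is a table of
# published, expert-defined weights. No training data is involved.

def cha2ds2_vasc(chf, hypertension, age, diabetes,
                 prior_stroke_tia, vascular_disease, female):
    score = 1 if chf else 0                               # C: heart failure
    score += 1 if hypertension else 0                     # H: hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # A2 / A
    score += 1 if diabetes else 0                         # D: diabetes
    score += 2 if prior_stroke_tia else 0                 # S2: stroke/TIA
    score += 1 if vascular_disease else 0                 # V: vascular disease
    score += 1 if female else 0                           # Sc: sex category
    return score

print(cha2ds2_vasc(chf=False, hypertension=True, age=78, diabetes=False,
                   prior_stroke_tia=False, vascular_disease=False,
                   female=True))  # -> 4
```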
🔄 2. Representation-Driven Systems
These systems learn their own complex patterns (or "representations") directly from large amounts of raw, unstructured data. This is the category that contains most modern Deep Learning (DL) models.
What is the base algorithm (ML or DL)?
This category is almost exclusively Deep Learning (DL). The "representation" is the set of features the model *learns* on its own, as opposed to features a human *tells* it to look for.
Key algorithm families:
- Convolutional Neural Networks (CNNs): The workhorse of computer vision. They learn visual features (edges, shapes) directly from pixels.
- Transformers (e.g., GPT, BERT): The foundation of modern NLP. They learn complex relationships between words in massive text datasets.
- Generative Adversarial Networks (GANs): Used to generate new, realistic data (like synthetic medical images).
Defining Question
Does it learn directly from raw, unstructured data?
Key Requirement
Large datasets, high compute, pattern discovery.
Typical Applications
Medical imaging (e.g., segmentation), NLP (e.g., text classification), sensor fusion.
Real-World Healthcare Examples
1. CT Tumor Segmentation (Vision AI)
An AI system that automatically draws a contour around a tumor on a CT scan.
- How it Works: A CNN (like a U-Net) is trained on thousands of CT scans where radiologists have already outlined tumors.
- Learned Representation: The model learns a complex *statistical representation* of pixel patterns (textures, densities) correlated with the expert's outlines.
- Output: A "segmentation mask" (a new image highlighting the predicted tumor).
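For intuition, here is a deliberately tiny PyTorch sketch of a segmentation-style CNN. A real U-Net adds an encoder/decoder with skip connections; the layer sizes here are arbitrary:

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy per-pixel classifier: CT slice in, probability mask out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),   # per-pixel tumor logit
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))      # probabilities in [0, 1]

ct_slice = torch.randn(1, 1, 128, 128)         # fake single-channel CT slice
mask = TinySegNet()(ct_slice)                  # same height/width as input
print(mask.shape)                              # torch.Size([1, 1, 128, 128])
```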
2. Radiology Report Summarization (NLP AI)
An AI that reads a long, free-text radiology report and generates a concise "Impressions" section.
- How it Works: A Transformer model is trained on millions of pairs of full reports and their "Impressions" sections.
- Learned Representation: The model learns the statistical relationships between medical terms and how to "condense" finding sentences into conclusion sentences.
- Output: A new, short block of generated text.
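As a sketch of this workflow, the Hugging Face `pipeline` API runs Transformer summarization in a few lines. A production system would use a model fine-tuned on report/impression pairs; here the library's general-purpose default stands in, and the sample report is invented:

```python
from transformers import pipeline

# General-purpose summarizer standing in for a model fine-tuned on
# paired radiology reports and their "Impressions" sections.
summarizer = pipeline("summarization")

report = (
    "FINDINGS: There is a 2.1 cm spiculated nodule in the right upper lobe. "
    "No pleural effusion. Mediastinal lymph nodes are not enlarged. "
    "Heart size is normal."
)
print(summarizer(report, max_length=40, min_length=10)[0]["summary_text"])
```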
🤝 3. Hybrid Reasoning Systems
These systems combine learned patterns (like in Representation-Driven models) with explicit knowledge or rules (like in Rule-Guided systems). They aim to get the best of both worlds: perception and reasoning.
What is the base algorithm (ML or DL)?
This category is, by definition, a combination. It typically uses a powerful Representation-Driven (DL) model as its "engine" and connects it to a Rule-Guided (ML or database) system as its "guardrail."
Common techniques used:
- Retrieval-Augmented Generation (RAG): An LLM (DL-based) is "augmented" by "retrieving" factual information from an external database *before* it generates an answer.
- Neurosymbolic AI: Explicitly combines neural networks (for learning from raw data) with symbolic logic engines (for reasoning over facts).
Defining Question
Does it combine learned patterns with explicit rules?
Key Requirement
Integration of data (DL) and knowledge (rules) systems.
Typical Applications
Grounded LLM assistants (RAG), multimodal diagnosis, explainable AI (XAI).
Real-World Healthcare Examples
1. Oncology Guideline Assistant (RAG)
A clinician asks an AI, "What is the NCCN-recommended first-line treatment for Stage II HER2+ breast cancer?"
- Representation-Driven Part (DL): An LLM (like GPT-4) understands the *intent* of the user's question.
- Rule-Guided Part (Database): The system performs a RAG search, querying a private, up-to-date database of NCCN guidelines (the "rule book").
- Hybrid Reasoning: The LLM synthesizes the *retrieved facts* into a clear, trustworthy answer, preventing "hallucinations."
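A minimal sketch of this RAG loop is shown below; `search_guidelines` and `llm_generate` are hypothetical stand-ins for a real vector store and a real LLM API:

```python
# Hybrid Reasoning sketch: retrieval (the "rule book") constrains
# generation (the learned language model).

def search_guidelines(query: str, k: int = 3) -> list[str]:
    # In practice: embed the query and return the k most similar
    # passages from a curated guideline database.
    return ["<guideline passage 1>", "<guideline passage 2>"]

def llm_generate(prompt: str) -> str:
    # In practice: a call to a hosted or on-premise LLM.
    return "<answer grounded in the retrieved passages>"

def answer(question: str) -> str:
    passages = search_guidelines(question)        # Rule-Guided side
    prompt = (
        "Answer ONLY from the passages below; say 'not found' otherwise.\n"
        + "\n".join(passages)
        + f"\n\nQuestion: {question}"
    )
    return llm_generate(prompt)                   # Representation-Driven side

print(answer("First-line treatment for Stage II HER2+ breast cancer?"))
```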
2. Multimodal Diagnostic AI (Image + Text)
An AI system's goal is to find the most likely diagnosis by looking at *both* a chest X-ray *and* the patient's EHR text.
- Representation-Driven Parts (DL): A "Vision Model" analyzes the X-ray, and an "LLM" reads the EHR text.
- Hybrid Reasoning: A "reasoning layer" (the hybrid part) combines these two outputs. It reasons that:
(Visual Pattern A) + (Textual Symptoms B) = High Probability of Pneumonia.
📱 4. Resource-Adaptive Systems
This category classifies AI not by its logic, but by its *deployment constraints*. These systems are designed to run efficiently and privately, often with limited compute power, memory, or network access.
What is the base algorithm (ML or DL)?
The base algorithm can be either ML or DL. The *defining feature* is that this base algorithm has been heavily optimized (e.g., compressed, pruned, or quantized) to run "on the edge."
Key techniques used:
- Federated Learning (FL): The AI model is *sent to the data* (e.g., to multiple hospitals) to be trained locally. Only the model's learnings are sent back, not the private patient data.
- Model Quantization & Pruning: Techniques to shrink a large DL model into a "TinyML" version that can run on low-power hardware (like a smartphone).
- On-Device AI: Running the entire AI model within the device's own processor (e.g., Apple's Neural Engine).
Defining Question
Is it designed to run under specific resource constraints?
Key Requirement
Privacy (e.g., Federated Learning), low-power (e.g., TinyML), low-latency.
Typical Applications
Edge AI, on-device inference, federated learning, wearable sensors.
Real-World Healthcare Examples
1. Privacy-Preserving Tumor Model (Federated Learning)
A consortium of hospitals wants to build a powerful tumor segmentation AI (a CNN) without any hospital sharing its private patient scans.
- How it Works: A "global model" is sent to Hospital A to train locally. The *updated model* (not the data) is then sent to Hospital B, and so on.
- Resource Adaptation: The system adapts to the *resource constraint* of data privacy rules, enabling collaboration without centralization.
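The aggregation step at the heart of this process, Federated Averaging (FedAvg), is simple enough to sketch directly. The weight vectors and hospital dataset sizes below are invented:

```python
import numpy as np

def fedavg(local_weights: list[np.ndarray], n_samples: list[int]) -> np.ndarray:
    """One round of FedAvg: average local updates, weighted by data size.

    Only these weights ever leave each hospital; the scans never do.
    """
    total = sum(n_samples)
    return sum(w * (n / total) for w, n in zip(local_weights, n_samples))

# Hypothetical updates from three hospitals after local training.
updates = [np.array([0.20, -0.10]), np.array([0.25, -0.05]),
           np.array([0.15, -0.20])]
sizes = [1200, 800, 400]
print(fedavg(updates, sizes))   # new global weights for the next round
```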
2. On-Device Ultrasound Triage (TinyML)
A portable, handheld ultrasound probe that can instantly tell the operator "Normal" or "Abnormal" in real-time, even with no Wi-Fi.
- How it Works: A large DL model is "quantized" (shrunk) to be small enough to run on the probe's tiny internal chip.
- Resource Adaptation: The AI adapts to the *resource constraints* of low power, no internet, and the need for immediate (low-latency) results.
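One common shrinking technique, post-training dynamic quantization, takes a few lines in PyTorch. The toy model below stands in for the real ultrasound network:

```python
import torch
import torch.nn as nn

# Toy classifier standing in for the full ultrasound triage model.
model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 2))

# Store weights as 8-bit integers instead of 32-bit floats, shrinking
# the model so it fits on constrained edge hardware.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized(torch.randn(1, 256)))   # same interface, smaller footprint
```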
🤖 5. Autonomous Learning Systems
These systems are designed to learn and adapt *autonomously* in their environment. They modify their own behavior based on real-time feedback (rewards, corrections, or new data) to achieve a goal.
What is the base algorithm (ML or DL)?
This category is defined by its *learning method*, almost always Reinforcement Learning (RL) or Continual Learning. It uses a DL model as its "brain" but adds a feedback loop that allows that brain to change *after* deployment.
Key techniques used:
- Reinforcement Learning (RL): The AI ("agent") learns by trial and error. It takes an "action" and receives a "reward" or "penalty."
- Reinforcement Learning from Human Feedback (RLHF): How models like ChatGPT are "aligned." A human ranks generated answers, and an RL model learns from these rankings.
- Continual Learning: The AI learns from new data as it arrives, without forgetting what it learned in the past.
Defining Question
Can it evolve via feedback or interaction?
Key Requirement
Reinforcement learning, continual adaptation, interactive feedback loops.
Typical Applications
Personalized triage bots, robotic assistance, adaptive user interfaces.
Real-World Healthcare Examples
1. Adaptive Patient Triage Bot (RLHF)
An AI chatbot that helps patients check symptoms. After each interaction, the patient gives a "thumbs up" or "thumbs down."
- How it Works: The chatbot is a Representation-Driven LLM. The "thumbs up/down" is a human feedback signal (a reward/penalty).
- Autonomous Learning: The underlying RLHF model uses this feedback to *autonomously* update the LLM's parameters, making it more likely to give helpful answers over time.
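Full RLHF trains a separate reward model and updates the LLM's weights with policy-gradient RL; the toy bandit below captures only the core idea that thumbs-up feedback shifts future behavior. The reply "styles" are invented:

```python
import random
from collections import defaultdict

# style -> [thumbs_up + 1, thumbs_down + 1] (Beta prior of 1 each)
scores = defaultdict(lambda: [1, 1])

def pick_style(styles):
    # Thompson sampling: explore, but favor well-rated styles.
    return max(styles, key=lambda s: random.betavariate(*scores[s]))

def record_feedback(style, thumbs_up: bool):
    scores[style][0 if thumbs_up else 1] += 1

style = pick_style(["concise", "detailed", "step-by-step"])
record_feedback(style, thumbs_up=True)   # the loop continues after deployment
print(style, dict(scores))
```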
2. Autonomous Robotic Surgery Assistant (RL)
(Largely experimental) A robotic arm's goal is to autonomously place a suture.
- How it Works: In a simulation, the AI "agent" (the robot's controller) tries millions of tiny movements.
- Feedback Loop (RL): It receives a "reward" for actions that move the needle closer to the target and a "penalty" for moving it away.
- Autonomous Learning: It *autonomously* discovers the optimal policy (sequence of movements) to perform the task.
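The trial-and-error loop can be sketched with tabular Q-learning on a toy one-dimensional "move the needle to the target" task. Real surgical control involves continuous states, continuous actions, and far richer reward signals:

```python
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
TARGET = 5                               # needle position we want to reach
Q = {(s, a): 0.0 for s in range(11) for a in (-1, 1)}

for episode in range(2000):
    s = random.randrange(11)
    for _ in range(20):
        # Epsilon-greedy: mostly exploit the best-known action.
        a = (random.choice((-1, 1)) if random.random() < EPSILON
             else max((-1, 1), key=lambda act: Q[(s, act)]))
        s2 = min(10, max(0, s + a))
        reward = 1.0 if s2 == TARGET else -0.1   # reward shapes the policy
        Q[(s, a)] += ALPHA * (reward + GAMMA * max(Q[(s2, -1)], Q[(s2, 1)])
                              - Q[(s, a)])
        s = s2
        if s == TARGET:
            break

# Learned policy: move toward the target from every position.
print({s: max((-1, 1), key=lambda act: Q[(s, act)]) for s in range(11)})
```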
Secondary Modifiers
To add more detail, ABAF refines the classification with four modifier dimensions. Each dimension and its subtypes are described below.
Data Dependency
Reflects whether the model relies on structured data, large-scale unstructured data, or a mix.
- Low: e.g., Simple rule-based systems.
- High: e.g., Large vision or language models.
- Mixed: e.g., Multimodal systems.
Learning Mode
Indicates if the model can evolve or self-correct over time.
- Static: Trained once and deployed.
- Incremental: Can be updated with new data batches.
- Interactive: Learns continuously from user feedback (see Autonomous Learning).
Knowledge Source
Defines the degree of human vs. emergent knowledge in the system.
- Expert: Rules/knowledge provided by humans.
- Learned: Patterns discovered from data.
- Combined: A mix of both (see Hybrid Reasoning).
Deployment Environment
Captures the resource and privacy context of where the AI runs.
- Cloud: Runs in a remote data center.
- Edge / On-Device: Runs locally on a device (e.g., phone, scanner).
- Hybrid: A combination of both.
Fingerprinting an AI Model with ABAF
Beyond simple classification, ABAF can be used as a quantitative "fingerprinting" tool. By scoring a model along its core categories and modifiers, we can create a rich, multidimensional profile that instantly communicates its behavior and needs. This moves beyond a single label to a complete operational signature.
Each dimension is scored on a 4-point scale (0 = Not Applicable, 1 = Mild, 2 = Moderate, 3 = Significant), yielding two distinct "signatures":
1. The Behavioral Signature
This five-dimension signature scores the *function* of the AI, based on the core categories. It answers: "What does this AI *do*?"
- Rule-Guided (RG) [Score 0-3]
- Representation-Driven (RD) [Score 0-3]
- Hybrid Reasoning (HR) [Score 0-3]
- Resource-Adaptive (RA) [Score 0-3]
- Autonomous Learning (AL) [Score 0-3]
2. The Contextual Signature
This four-dimension signature scores the *requirements* of the AI, based on the modifiers. It answers: "What does this AI *need*?"
- Data Dependency [Score 0-3]
- Learning Mode [Score 0-3]
- Knowledge Source [Score 0-3]
- Deployment Environment [Score 0-3]
Example Fingerprint: Federated Radiology Network
Behavioral Signature
- Rule-Guided: 1 (Mild) - Uses simple rules for data handling.
- Representation-Driven: 3 (Significant) - Core model is a CNN.
- Hybrid Reasoning: 0 (Not Applicable)
- Resource-Adaptive: 3 (Significant) - Its *defining* trait is adapting to privacy constraints.
- Autonomous Learning: 0 (Not Applicable) - The model is static once trained.
Contextual Signature
- Data Dependency: 3 (Significant) - Needs large, diverse data from many hospitals.
- Learning Mode: 1 (Mild) - Static or incrementally updated, not interactive.
- Knowledge Source: 2 (Moderate) - Mostly 'Learned' from data.
- Deployment Environment: 3 (Significant) - A complex 'Hybrid' of edge (hospital) and cloud.
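Encoded as a data structure, a fingerprint becomes machine-readable and easy to compare across systems. The class below is an illustrative encoding of the example above, not part of the published framework:

```python
from dataclasses import dataclass

@dataclass
class ABAFFingerprint:
    # Behavioral signature: what the system does (0 = N/A ... 3 = Significant)
    rule_guided: int
    representation_driven: int
    hybrid_reasoning: int
    resource_adaptive: int
    autonomous_learning: int
    # Contextual signature: what the system needs
    data_dependency: int
    learning_mode: int
    knowledge_source: int
    deployment_environment: int

federated_radiology = ABAFFingerprint(
    rule_guided=1, representation_driven=3, hybrid_reasoning=0,
    resource_adaptive=3, autonomous_learning=0,
    data_dependency=3, learning_mode=1, knowledge_source=2,
    deployment_environment=3,
)
print(federated_radiology)
```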
ABAF in Practice: Healthcare Examples
How does ABAF reclassify real-world AI systems? This section provides practical examples from healthcare, contrasting the vague, old labels with the more informative and functional ABAF classification.
Radiomics-based Risk Prediction
Traditional Label: ML
ABAF Class: Rule-Guided
Interpretive Benefit:
Provides clear, feature-based reasoning that a clinician can verify (e.g., risk is high *because* of these 5 specific features).
CT Tumor Segmentation
Traditional Label: DL
ABAF Class: Representation-Driven
Interpretive Benefit:
Clearly states that it learns complex visual features end-to-end from raw pixel data, setting expectations about its data appetite.
Oncology Guideline Assistant
Traditional Label: DL (LLM)
ABAF Class: Hybrid Reasoning
Interpretive Benefit:
Shows it combines 'learned' language perception with an 'expert' knowledge base (e.g., NCCN guidelines) for factual grounding.
Federated X-ray Triage
Traditional Label: DL
ABAF Class: Resource-Adaptive
Interpretive Benefit:
Highlights that its key feature is the *deployment method* (federated learning) which enables privacy-preserving training across hospitals.
Feedback-Driven Report Summarizer
Traditional Label: DL
ABAF Class: Autonomous Learning
Interpretive Benefit:
Makes it clear this model is not static; it is designed to *continuously improve* its summaries based on corrections from radiologists.
Future Impact & Conclusion
Beyond classification, ABAF provides a foundation for improving AI literacy, regulation, and research. This section outlines the framework's future potential and key takeaways, moving the conversation from "What algorithm?" to "What does it do and what does it need?"
Future Directions
- AI Literacy: Serve as a foundation for AI curricula in medicine, business, and management schools.
- Regulation: Link AI categories to specific audit and explainability requirements (e.g., Rule-Guided systems may have different standards than Representation-Driven ones).
- Research Standards: Encourage authors to describe models by their requirements and approach, not just algorithmic labels.
- Industry Benchmarking: Help organizations choose the right tool for the job (e.g., interpretable vs. resource-efficient vs. adaptive AI).
Conclusion: The ABAF Advantage
The Approach-Based AI Framework (ABAF) reframes our understanding of AI. It moves the conversation from:
“What algorithm do you use?”
to the more practical and important question:
“What does this AI system need, how does it learn, and how does it adapt?”
By classifying AI through functional logic and operational requirements, ABAF builds a crucial bridge between technical sophistication and practical usability. This is essential for fields like healthcare, where clarity, accountability, and adaptability define success.