An Interactive Exploration of Basic Machine Learning and Deep Learning
Author: Dr. Sharad Maheshwari.
24.5.2025
Jump directly to an interactive website (Gmail and Gemini integration required)
https://g.co/gemini/share/f125880ecfe2
Introduction: Unlocking the Power of AI
The modern world is filled with examples of intelligent technology that often seem to operate by magic. Consider how a smartphone instantly recognizes a face to unlock itself, or how voice assistants like Alexa and Siri understand spoken commands and respond intelligently. These seemingly magical abilities are not supernatural phenomena; rather, they are the impressive results of two rapidly evolving fields in computer science: Machine Learning (ML) and Deep Learning (DL).1
To place Machine Learning (ML) and Deep Learning (DL) within the larger technology landscape, it’s useful to see Artificial Intelligence (AI) as the broad domain under which they fall.
- Machine Learning is a fundamental subset of AI focused on creating systems that learn from data and improve over time without explicit programming.
- Deep Learning is a specialized branch of ML that uses multi-layered neural networks to automatically uncover complex patterns in large, high-dimensional datasets.
Studying ML and DL is not just theoretical — it’s about enabling computers to analyze vast data volumes and make intelligent decisions autonomously. These technologies automate tasks requiring pattern recognition, which humans do well but struggle with at scale.
Understanding ML and DL fundamentals unlocks insights into how computers predict outcomes (like weather or stock prices), classify data (such as detecting spam or diagnosing diseases), and cluster similar entities (for example, segmenting customer behavior). Their applications span healthcare, finance, technology, and beyond — underscoring their transformative role in shaping the future.
Machine Learning - Teaching Computers to Learn
What is Machine Learning?
Machine Learning is essentially about teaching computers to learn from “experience,” much like how humans acquire new skills through practice and observation. Rather than coding rigid rules for every possible scenario, ML relies on feeding the system large amounts of data. The computer then independently uncovers patterns and relationships within that data.
Imagine teaching a child to recognize apples. Instead of listing strict rules like “red, round, with a stem,” you’d simply show thousands of images labeled “apple” or “not apple.” Over time, the child intuitively learns to identify apples by spotting recurring features such as shape, color, and size. Machine Learning works the same way, enabling computers to generalize and make decisions based on complex data patterns rather than fixed instructions.
How ML Learns: Data and Patterns
At its core, the process of ML involves "training" models by feeding them substantial quantities of data.2 This data serves as the experiential foundation from which the computer learns. The machine then employs a set of computational rules, known as an algorithm, to analyze this input data and derive inferences or predictions.
A crucial aspect of this process is that the effectiveness of an ML model is directly tied to the quantity of data it processes; the more data the machine analyzes, the more proficient and accurate it becomes at performing its designated task or making decisions.2
This inherent dependency on data highlights a fundamental principle: the performance of ML models is significantly enhanced by larger and more diverse datasets.
Learning from Data: Supervised vs. Unsupervised Learning
Supervised Learning
In supervised learning, the model learns from a labeled dataset, where each input has a known output. The goal is to learn the relationship between inputs and outputs so the model can predict the outcome for new, unseen data.
- Analogy: Like teaching a radiology trainee with annotated scans—e.g., “This is a lung nodule,” “This is normal.”
- Common algorithms: Linear regression, logistic regression, support vector machines, random forests.
- Healthcare examples:
  - Predicting malignancy based on imaging features.
  - Classifying chest X-rays as “normal” or “pneumonia.”
Unsupervised Learning
Here, the model works with unlabeled data and tries to find patterns, structures, or groupings without knowing the correct answer in advance.
- Analogy: Like looking at 1,000 unlabelled CT scans and trying to group them based on similarities in anatomy or pathology.
- Common algorithms: K-means clustering, hierarchical clustering, principal component analysis (PCA).
- Healthcare examples:
  - Clustering patients based on symptoms or imaging findings to discover new disease subtypes.
  - Dimensionality reduction of radiomics features.
Bonus: Semi-Supervised & Reinforcement Learning
- Semi-supervised: Uses a small labeled dataset and a large unlabeled one (e.g., partially annotated scans).
- Reinforcement learning: Learns by trial and error, like optimizing radiation therapy plans over time.
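To make the supervised/unsupervised distinction concrete, here is a minimal sketch using scikit-learn; the tiny "lesion" dataset and its feature values are invented purely for illustration.

```python
# A minimal sketch contrasting supervised and unsupervised learning.
# The toy "lesion" data below is invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Each row: [lesion size in mm, mean intensity]; labels: 0 = benign, 1 = malignant
X = np.array([[5, 30], [7, 35], [6, 28], [20, 80], [22, 85], [25, 90]])
y = np.array([0, 0, 0, 1, 1, 1])

# Supervised learning: the model sees both inputs and the known labels.
clf = LogisticRegression().fit(X, y)
print("Predicted label for a new lesion:", clf.predict([[21, 82]]))

# Unsupervised learning: the model sees only the inputs and looks for groupings.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("Cluster assignments:", km.labels_)
```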
Types of ML Algorithms
Machine Learning isn’t a one-size-fits-all tool; it’s more like a toolbox filled with specialized algorithms, each engineered to tackle a distinct type of problem effectively.
Understanding Linearity vs. Nonlinearity in ML
Some algorithms assume a straightforward, proportional relationship between inputs and outputs—this is called linearity. Others handle more complex, irregular relationships—this is nonlinearity.
To understand this better, let’s consider a real-world example. Imagine pricing a pizza. Initially, the price might increase steadily as the size increases—this is linear behavior. But you might need a larger oven or two people to handle it beyond a certain size, causing the cost to jump disproportionately—this is non-linear behavior.
Understanding whether a problem is linear or non-linear helps in selecting the right type of ML algorithm.
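To make the analogy concrete, here is a minimal sketch (the prices and the size threshold are invented for illustration) contrasting a linear pricing rule with a non-linear one:

```python
# Linear vs. non-linear behavior, using the pizza-pricing analogy above.
def linear_price(diameter_cm: float) -> float:
    return 2.0 + 0.5 * diameter_cm          # price grows in proportion to size

def nonlinear_price(diameter_cm: float) -> float:
    price = 2.0 + 0.5 * diameter_cm
    if diameter_cm > 40:                    # beyond this size, extra handling is needed
        price += 15.0                       # a disproportionate jump: non-linear behavior
    return price

for d in (20, 30, 40, 50):
    print(d, linear_price(d), nonlinear_price(d))
```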
The following list introduces some common ML algorithms and examples of their practical applications, making these abstract concepts more concrete and understandable for beginners:
- Linear regression (supervised, linear): predicting a continuous value, such as a price.
- Logistic regression (supervised): classifying cases into categories, such as spam vs. not spam.
- Support vector machines and random forests (supervised): classification tasks such as distinguishing benign from malignant findings.
- K-means and hierarchical clustering (unsupervised): grouping similar cases, such as customer or patient segments.
- Principal component analysis (PCA, unsupervised): reducing the dimensionality of large feature sets, such as radiomics features.
Deep Learning - Brain-Inspired Intelligence
What is Deep Learning?
Deep Learning is a specialized branch of Machine Learning built on artificial neural networks with many stacked layers, where each layer processes information and passes its findings on to the next. This layered, collaborative processing enables Deep Learning to automatically extract increasingly abstract and complex features from raw data without manual feature engineering, which sets it apart from traditional machine learning.
Neural Networks: The Brain of Deep Learning
The inherent strength of Deep Learning models stems from their capacity to learn "hierarchical representations of data".3
Hierarchical representation means the model learns data in stages—from simple to complex. Imagine looking at a picture:
- The first layer spots edges and colors (basic shapes).
- The next layer combines those edges into patterns like corners or textures.
- Deeper layers recognize parts of objects, like eyes or wheels.
- The final layers understand the full object, like a face or a car.
Each layer builds on the previous one’s findings, creating a step-by-step understanding that turns raw data into meaningful concepts automatically. This means that instead of merely processing raw data, the network learns to decompose complex inputs into a series of increasingly abstract and meaningful features as the information passes through its layers. This automatic feature extraction is a significant advantage, as it removes the need for human experts to manually identify and select features, a process often required in traditional Machine Learning.
The Perceptron: A Single "Thinking Unit"
The perceptron stands as the simplest and earliest form of a neural network unit, conceived by Frank Rosenblatt in 1957. It can be conceptualized as a single "thinking unit" within a computer, drawing inspiration from how a single neuron in the brain might function. A perceptron receives multiple pieces of information, processes them, and renders a straightforward "yes" or "no" decision—a process known as binary classification.
This process involves several key components:
- Inputs: These are the pieces of information the perceptron receives, typically numerical features. For example, deciding whether to carry an umbrella might involve inputs such as:
  - "Is it raining?" (1 for yes, 0 for no)
  - "Do you have an umbrella?" (1 or 0)
  - "Are you in a hurry?" (1 or 0)
- Weights: Each input is assigned a weight reflecting its influence on the decision. In the umbrella scenario:
  - "Rain" might have a high weight (e.g., 0.8) because it’s critical.
  - "Having an umbrella" might have a moderate weight (0.5).
  - "Being in a hurry" a lower weight (0.2).
- Bias: The bias acts as a baseline or default value that shifts the decision threshold, allowing the perceptron to lean toward a particular outcome even when inputs are ambiguous. For instance, a positive bias might nudge the model toward taking the umbrella by default.
- Activation Function: After summing the weighted inputs and bias, the total value passes through an activation function, which determines the final output. A simple activation function outputs 1 ("yes") if the total surpasses a threshold, or 0 ("no") otherwise.
The perceptron’s simplicity makes it a foundational building block for more complex neural networks, but on its own, it can only solve linearly separable problems. This limitation spurred the development of multi-layer networks and advanced architectures that power today’s deep learning applications in medicine and beyond.
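To tie these components together, here is a minimal sketch of the umbrella perceptron in Python, using the illustrative weights and bias from the example above and a simple step activation:

```python
# Minimal perceptron sketch for the umbrella example (illustrative values only).
def perceptron(inputs, weights, bias, threshold=0.5):
    # Weighted sum of the inputs plus the bias...
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...passed through a step activation function: "yes" (1) or "no" (0).
    return 1 if total > threshold else 0

# Inputs: [is it raining?, do you have an umbrella?, are you in a hurry?]
weights = [0.8, 0.5, 0.2]   # rain matters most, being in a hurry matters least
bias = 0.1                  # slight default lean toward taking the umbrella

print(perceptron([1, 1, 0], weights, bias))  # raining, umbrella at hand -> 1 (take it)
print(perceptron([0, 0, 1], weights, bias))  # dry and in a hurry -> 0 (leave it)
```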
An interactive exploration of the perceptron: https://g.co/gemini/share/720bea1eba93
Challenge: A fundamental limitation of a single perceptron is its inability to solve problems where the data is not linearly separable—that is, when a straight line cannot divide the classes perfectly. For instance, a perceptron might effectively classify cats versus dogs if their features can be separated by a simple boundary. However, for more complex tasks like recognizing handwritten digits, which involve subtle variations and nonlinear patterns, a single perceptron fails.
This limitation highlights the necessity for multi-layer neural networks, which introduce hidden layers capable of modeling complex, nonlinear relationships in data. Understanding this limitation provides the essential groundwork for exploring deeper and more powerful neural architectures.
Multi-Layer Neural Networks (MLPs): Stacking for Complexity
Multi-layer neural networks are structured into three primary types of layers:
● Input Layer: This is the initial layer where raw data is fed into the network. Examples include the individual pixel values of an image or the words that constitute a sentence.1
● Hidden Layers: These intermediate layers are where the primary "thinking" and processing occur. Each neuron within a hidden layer processes information received from the preceding layer, applying its unique set of weights, biases, and an activation function. The presence of multiple hidden layers is what grants these networks the ability to learn incredibly complex and abstract patterns from the data.1
● Output Layer: This final layer of the network produces the ultimate result of the network's computations, such as classifying an image as "this is a cat" or identifying a handwritten digit as "7".1
This hierarchical feature learning is the defining strength of deep learning. Unlike traditional machine learning, which relies heavily on manual feature engineering, deep networks autonomously discover nuanced representations, enabling them to capture intricate patterns that were previously inaccessible. By stacking layers, deep learning models transcend the limitations of single-layer perceptrons, unlocking a level of sophistication and accuracy critical for today's complex AI challenges.
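As an illustration, here is a minimal sketch (with made-up weights and inputs) of a single forward pass through a tiny network with an input layer, one hidden layer, and an output layer:

```python
# Minimal forward pass through a tiny multi-layer network (made-up values).
import numpy as np

def relu(x):
    return np.maximum(0, x)                # activation used in the hidden layer

x = np.array([0.5, -1.2, 3.0])             # input layer: 3 raw feature values

W1 = np.array([[0.2, -0.5, 0.1],           # hidden layer: 2 neurons, each with
               [0.7,  0.3, -0.2]])         # its own weights for the 3 inputs
b1 = np.array([0.1, -0.3])
hidden = relu(W1 @ x + b1)                 # weighted sums + biases, then activation

W2 = np.array([[0.6, -0.4]])               # output layer: 1 neuron
b2 = np.array([0.05])
output = W2 @ hidden + b2                  # the network's final result

print(hidden, output)
```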
The Building Blocks of Neural Networks
Why Weights and Biases Matter
Weights and biases are core components that allow neural networks to learn, adapt, and make accurate predictions.
● Weights:
Weights determine how much influence each input has on a neuron's output. Think of them as volume knobs—turning up or down the importance of different inputs. For example, in deciding whether to take an umbrella, “rain” might have a higher weight than “being in a hurry,” because it's more relevant to the outcome.
During training, these weights are continuously updated to improve predictions. A key insight is that changes in weight matter more when the input neuron is more active—a principle reminiscent of the biological rule: "neurons that fire together, wire together." This analogy bridges neural network training with how learning occurs in the brain.
● Biases:
Biases serve as adjustable thresholds. They allow a neuron to activate even if inputs are weak—or to remain inactive unless inputs are strong. You can think of a bias as a built-in lean: for example, a tendency to "take the umbrella just in case,” even without clear signs of rain.
Both weights and biases are the main learnable parameters in a neural network. They're tuned during training through optimization algorithms like gradient descent (which takes incremental steps to reduce errors) and backpropagation (which calculates how to adjust each parameter by tracing the error backward through the network).
Even without diving into the math, understanding the roles of weights and biases gives crucial insight into how neural networks learn and generalize.
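As a hedged illustration of what "tuning parameters with gradient descent" means, here is a minimal sketch that learns a single weight and bias to fit a made-up linear relationship; real networks apply the same idea to millions of parameters via backpropagation:

```python
# Minimal gradient descent sketch: learn w and b so that prediction = w*x + b
# matches a made-up target relationship y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

w, b = 0.0, 0.0                 # start from arbitrary parameter values
lr = 0.05                       # learning rate: the size of each incremental step

for step in range(500):
    # Gradients of the mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    # Take a small step "downhill" to reduce the error
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # approaches w ≈ 2, b ≈ 1
```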
The Role of Activation Functions
An activation function serves as a "gatekeeper" for each neuron within a neural network.1 After a neuron computes the weighted sum of its inputs and incorporates its bias, this sum is passed through the activation function. The function then determines whether the neuron should "fire" (i.e., produce an output value) and what that value should be. It essentially evaluates, "Is this piece of information significant enough to be passed on to the subsequent layer of the network?".1
Common types of activation functions include:
● Step Function: Used in very basic perceptrons, this function produces a binary output: 1 if the total input exceeds a certain threshold, and 0 otherwise.1
● Sigmoid: This function transforms any input number into a value ranging between 0 and 1, which can be particularly useful for representing probabilities.1
● ReLU (Rectified Linear Unit): A widely adopted and computationally efficient function in modern neural networks. It simply outputs the input value if it is positive, and 0 if it is negative or zero.1
The introduction of activation functions is critical because they inject "non-linearity" into the network, enabling it to learn complex patterns that linear models cannot.
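A minimal sketch of the three activation functions described above:

```python
# Simple implementations of the activation functions discussed in this section.
import math

def step(x, threshold=0.0):
    return 1 if x > threshold else 0        # binary on/off output

def sigmoid(x):
    return 1 / (1 + math.exp(-x))           # squashes any number into (0, 1)

def relu(x):
    return max(0.0, x)                      # passes positives through, zeroes out negatives

print(step(0.7), sigmoid(0.7), relu(-0.7))  # 1, ~0.67, 0.0
```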
Understanding Linearity vs. Non-Linearity in Learning Models
A linear model draws a straight line through data—it’s simple, fast, and easy to interpret. For basic relationships, like estimating a person’s height based on age, it can work reasonably well. But real-world data is rarely this straightforward. Patterns in images, language, or even medical signals are often complex and nonlinear.
This is where non-linearity becomes essential. Non-linear activation functions (like ReLU or sigmoid) give neural networks the power to model curves, twists, and layered dependencies in data. Without them, even a deep neural network would behave just like a single-layer linear model—essentially useless for complex tasks. It's like trying to sketch a circle with only straight lines—you lose the essence of the shape. Non-linearity gives networks the flexibility to capture the true form of real-world patterns.
Machine Learning vs. Deep Learning – What Really Sets Them Apart
Deep Learning is a specialized type of Machine Learning, but they operate differently at a core level.
Traditional Machine Learning (ML) methods—like decision trees or logistic regression—depend heavily on human-designed features. That means experts must manually decide which characteristics of the data are important. For instance, if you're building a model to detect cats in images, you might explicitly define "pointy ears" or "whiskers" as features.
Deep Learning (DL), by contrast, doesn't need this kind of manual work. It uses deep neural networks with many layers that can automatically learn the right features from raw data—like pixels, audio waveforms, or text. This removes the need for labor-intensive feature engineering and allows DL to outperform traditional ML in tasks involving images, speech, and natural language.
In short, traditional ML needs you to tell it what to look for, while DL figures it out on its own. That’s why DL is driving breakthroughs in AI today—it scales with complexity, learns rich patterns, and handles messy, high-dimensional data that older models can't manage.
The Generative Shift: Generative AI
Generative AI (GenAI) represents a frontier in artificial intelligence where machines not only analyze data but create entirely new content—from text and images to music and even 3D models. At its core, GenAI is powered by deep learning architectures, particularly neural networks trained on massive datasets. These models learn the underlying patterns of data and use them to generate original outputs that resemble the training data, often with remarkable realism.
For example, ChatGPT can generate human-like conversation; DALL·E can create images from text prompts; and models like StyleGAN can synthesize photorealistic human faces. These technologies rely on sophisticated neural network architectures, such as transformers or generative adversarial networks (GANs), and represent a leap from learning to imitating human creativity.
Generative AI has transformative potential in healthcare, enabling:
- Creation of synthetic medical images for training algorithms.
- Natural language summarization of radiology reports.
- Simulation of rare pathological findings to augment diagnostic datasets.
This generative capability marks a shift from simply recognizing patterns in data to producing data itself—opening up new ethical, practical, and scientific horizons.
Why Different Networks?
As researchers and engineers continued to push the boundaries of AI, they encountered an increasingly diverse array of complex problems. It became evident that a single, general-purpose neural network architecture would not be optimally efficient or effective for every type of data or task. This realization spurred the development of highly specialized neural networks, each meticulously designed to excel at particular challenges.1 This evolution demonstrates the remarkable ingenuity in the field of AI, highlighting that there is no universal "one-size-fits-all" solution; instead, tailored approaches are often necessary to achieve optimal performance for specific problem types, such as processing images versus sequences of text.
Computer Vision: Convolutional Neural Networks (CNNs)
Convolutional Neural Networks (CNNs) are a specialized type of neural network designed to process image data. Their defining feature is the convolutional layer, which applies small filters (or "kernels") that scan across an image to detect patterns. This process—called convolution—acts like a sliding window that picks up local features such as edges, corners, and textures.
Early layers might detect simple shapes (like lines or curves), while deeper layers combine these to recognize more complex features—like eyes, wheels, or tumors. By learning and stacking these patterns hierarchically, CNNs can interpret visual information with high precision.
CNNs are the foundation of many powerful visual AI applications, from facial recognition and medical imaging (e.g., identifying disease in X-rays or MRIs) to computer vision in self-driving cars, where they help machines "see" and understand their surroundings.
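To illustrate the "sliding window" idea, here is a minimal sketch of convolution with a hand-made vertical-edge filter; the tiny image and kernel values are invented for illustration:

```python
# Minimal 2D convolution sketch: slide a small 3x3 filter over a tiny "image".
import numpy as np

image = np.array([[0, 0, 0, 9, 9, 9],      # a dark region next to a bright region:
                  [0, 0, 0, 9, 9, 9],      # the boundary forms a vertical edge
                  [0, 0, 0, 9, 9, 9],
                  [0, 0, 0, 9, 9, 9]])

kernel = np.array([[-1, 0, 1],             # classic vertical-edge detector:
                   [-1, 0, 1],             # responds strongly where brightness
                   [-1, 0, 1]])            # changes from left to right

h = image.shape[0] - 2
w = image.shape[1] - 2
feature_map = np.zeros((h, w))
for i in range(h):
    for j in range(w):
        patch = image[i:i + 3, j:j + 3]             # the sliding window
        feature_map[i, j] = np.sum(patch * kernel)  # one convolution step

print(feature_map)   # large values mark where the vertical edge sits
```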
Memory Retainer: RNNs & LSTMs for Sequential Data
Recurrent Neural Networks (RNNs) are designed to work with sequential data, where the order of information matters—like sentences in a paragraph, spoken language, or stock prices over time. What sets RNNs apart is their ability to retain memory of previous inputs through internal loops. This makes them ideal for tasks that depend on context and continuity, such as speech recognition, text generation, and time series forecasting.
However, basic RNNs struggle with long-term memory—they tend to "forget" information from earlier in a long sequence. To solve this, Long Short-Term Memory (LSTM) networks were introduced.
LSTMs improve on RNNs by using specialized memory cells and gates that control what information is kept, updated, or forgotten. This enables them to remember important details over longer spans, making them highly effective for complex tasks like language translation, captioning videos, and understanding long conversations.
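A minimal sketch (with made-up weights) of the core recurrent idea: the same update rule is applied at every step, and the hidden state carries a memory of earlier inputs forward:

```python
# Minimal recurrent step: the hidden state is updated from the previous hidden
# state and the current input, so earlier inputs influence later outputs.
import math

w_h = 0.5        # weight on the previous hidden state (the "memory" connection)
w_x = 1.0        # weight on the current input
h = 0.0          # the hidden state starts empty

sequence = [0.2, 0.7, -0.3, 0.9]        # a toy sequence (e.g., values over time)
for x in sequence:
    h = math.tanh(w_h * h + w_x * x)    # the same update rule reused at every step
    print(round(h, 3))                  # each state blends new input with past context
```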
Language Guru: Transformers
Transformers are a powerful and modern type of neural network architecture that has transformed how machines understand and generate language. Originally developed for translation tasks (hence the name Transformers), they are now the foundation for most advanced language models (Large Language Models, or LLMs), including ChatGPT and Google Bard.
At the heart of Transformers is a clever idea called the attention mechanism. Unlike older models that read text step-by-step or treat all words equally, attention allows the model to focus more on the most relevant words or parts of the input—much like how a human pays more attention to keywords when trying to understand a sentence.
Example: In the sentence “The doctor who examined the patient said she was stable,” attention helps the model figure out that “she” likely refers to “the patient” and not “the doctor.”
This ability to selectively “pay attention” makes Transformers excellent at handling long text, understanding context, and even reasoning.
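A minimal sketch of the attention idea: each word's query is compared against every word's key, and the resulting weights decide how much of each word's value flows into the output; the tiny vectors here are invented purely for illustration.

```python
# Minimal scaled dot-product attention over a 3-word "sentence" (toy vectors).
import numpy as np

words = ["doctor", "patient", "she"]
# Query, key, and value vectors for each word (2-dimensional toy values).
Q = np.array([[1.0, 0.0], [0.0, 1.0], [0.1, 0.9]])
K = np.array([[1.0, 0.0], [0.0, 1.0], [0.2, 0.8]])
V = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])

scores = Q @ K.T / np.sqrt(K.shape[1])                                # relevance of every word to every other word
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax turns scores into attention weights
output = weights @ V                                                  # each word becomes a weighted blend of the values

# How much attention the word "she" pays to each word in the sentence:
print(dict(zip(words, np.round(weights[2], 2))))
```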
That’s why many modern Large Language Models (LLMs) are built using Transformers. These models can:
- Translate languages
- Answer questions
- Summarize medical records
- Generate human-like text
What Does GPT Stand For?
GPT = Generative Pre-trained Transformer
- Generative: It creates new text.
- Pre-trained: It's trained on a large amount of text before fine-tuning.
- Transformer: The architecture it's built on.
To further demystify the terminology associated with Transformers, one related term worth knowing is the Small Language Model (SLM):
- Definition: A Small Language Model is a compact version of a language model (like GPT), typically trained on a smaller dataset with fewer parameters.
- Goal: Efficient, fast, low-resource deployment, especially on edge devices (phones, browsers, IoT).
- Size: Usually ranges from 5M to 1B parameters.
- Focus: Speed, privacy, and offline utility.
Image Improvers & Creators: Autoencoders and GANs
Autoencoders: The Image Improvers
Autoencoders specialize in learning compact representations of data and then reconstructing it—often with enhancements. They’re powerful for tasks like image denoising, compression, and anomaly detection. Think of them as digital restorers: they take a blurry or noisy image and rebuild a cleaner version from within.
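As a hedged sketch of the idea, using scikit-learn's general-purpose MLPRegressor as a stand-in rather than a dedicated autoencoder library, the trick is simply to train a network whose target output is its own input, with a narrow hidden layer forcing a compact representation:

```python
# Minimal autoencoder-style sketch: train a small network to reproduce its own
# input through a narrow "bottleneck" layer (toy random data for illustration).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))            # 200 samples with 8 features each

autoencoder = MLPRegressor(hidden_layer_sizes=(3,),   # bottleneck: compress 8 -> 3 -> 8
                           max_iter=2000, random_state=0)
autoencoder.fit(X, X)                    # target = input: learn to reconstruct

reconstruction = autoencoder.predict(X)
print("Reconstruction error:", np.mean((X - reconstruction) ** 2))
```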
GANs: The Image Creators
Generative Adversarial Networks go a step further: they don't just improve existing images, they generate entirely new, synthetic ones. They're the engines behind AI-generated portraits, deepfakes, and virtual art. A GAN contains two neural networks, a Generator that creates images and a Discriminator that judges whether each image is real or fake. The two battle each other in a zero-sum game until the Generator gets good at fooling the Discriminator.
The Modern Image Creators (The Ghibli Rage)
Most modern AI image generators have shifted toward diffusion models (like DALL·E 2, Stable Diffusion, or Midjourney). These diffusion models can generate images in many styles, including styles inspired by Studio Ghibli. They start with pure noise, then gradually denoise it step by step to form a clear image. Think of it like reversing a "noising" process: slowly turning random noise into a meaningful image over many iterations. The generation process is iterative and progressive.
The Grounded Answerer: Retrieval-Augmented Generation (RAG)
RAG (Retrieval-Augmented Generation) is an advanced AI architecture that combines a retriever and a generator:
- Retriever: Fetches relevant documents from a knowledge base or database.
- Generator: Uses the retrieved information to generate a coherent, grounded response (usually with a large language model like GPT).
In short: RAG = Search + Synthesize
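A minimal sketch of the retrieve-then-generate flow, using TF-IDF similarity as a stand-in retriever and a placeholder generate() function where a real system would call an LLM; the guideline snippets are invented for illustration:

```python
# Minimal RAG-style sketch: retrieve the most relevant snippet, then hand it to
# a generator. The "documents" and the generate() stub are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Guideline A: endocrine therapy options for HER2-negative breast cancer.",
    "Guideline B: imaging follow-up intervals after lung nodule detection.",
    "Guideline C: contrast agent safety in patients with renal impairment.",
]

def retrieve(query, docs, top_k=1):
    vectorizer = TfidfVectorizer()
    vectors = vectorizer.fit_transform(docs + [query])        # embed docs and query together
    scores = cosine_similarity(vectors[-1], vectors[:-1])[0]  # similarity of query to each doc
    ranked = sorted(zip(scores, docs), reverse=True)
    return [doc for _, doc in ranked[:top_k]]

def generate(query, context):
    # Stand-in for a real LLM call: a real system would send query + context to a model.
    return f"Answer to '{query}', grounded in: {context}"

query = "latest treatment guidelines for HER2-negative breast cancer"
context = retrieve(query, documents)[0]
print(generate(query, context))
```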
🏥 Use Case in Healthcare: Clinical Decision Support
📌 Scenario:
A doctor queries an AI assistant:
"What are the latest treatment guidelines for HER2-negative breast cancer?"
Without RAG:
- The LLM answers based on its training data, which might be outdated.
With RAG:
- The AI retrieves real-time clinical guidelines from trusted sources like NCCN or PubMed.
- Then, it generates an answer grounded in that current data.
Your AI Journey Begins!
The exploration of Machine Learning and Deep Learning reveals how these fields empower computers to learn from data and make intelligent decisions. Machine Learning involves teaching computers to identify patterns and make decisions based on examples, rather than explicit programming. Deep Learning, a more powerful and brain-inspired subset of ML, utilizes multi-layered neural networks to automatically learn complex, hierarchical features from raw data.
At the heart of these neural networks are fundamental building blocks: weights, biases, and activation functions. These components are crucial for enabling the networks to learn and to introduce the necessary non-linearity that allows them to solve real-world problems that cannot be described by simple straight lines. As the field progressed, specialized neural networks like Convolutional Neural Networks (CNNs) for images, Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTMs) for sequential data, and Transformers for language processing were developed to address unique challenges posed by different types of data and problems.
For those embarking on this journey, it is important to remember that while mathematics forms the bedrock of ML and DL, one does not need to be a math expert to begin. The most effective learning path often involves balancing theoretical understanding with hands-on projects and continuous practical application. Furthermore, as AI becomes increasingly integrated into daily life, it is crucial to always consider its ethical implications, particularly concerning algorithmic bias and data privacy. Being aware of these issues enables individuals to become responsible and critical users and developers of AI technologies.
The resources outlined in this guide—including websites, online courses, YouTube channels, Reddit communities, and introductory books—offer excellent starting points for deeper exploration. New learners are encouraged to experiment, build small projects, and engage with online communities to foster their understanding. Ultimately, grasping these fundamental concepts empowers individuals not only to utilize AI effectively but also to ask more informed questions about its capabilities and limitations, thereby contributing to its responsible development in the future.
Learning Resources:
http://abdominalimaging.blogspot.com/2025/05/machine-learning-top-learning-resources.html
Works cited
1. Basic ML_detailed.docx
2. Deep Learning vs. Machine Learning: A Beginner's Guide | Coursera, accessed on May 24, 2025, https://www.coursera.org/articles/ai-vs-deep-learning-vs-machine-learning-beginners-guide
3. What is Deep Learning? Applications & Examples | Google Cloud, accessed on May 24, 2025, https://cloud.google.com/discover/what-is-deep-learning
4. Backpropagation calculus - 3Blue1Brown, accessed on May 24, 2025, https://www.3blue1brown.com/lessons/backpropagation-calculus
5. Understanding 3blue1brown Vidoes on Neural Networks - Hongtao Hao, accessed on May 24, 2025, https://hongtaoh.com/en/2022/05/11/3blue1brown-nn/
6. Deep Learning Spring 2025: Syllabus and Schedule, accessed on May 24, 2025, https://www.bu.edu/eng/files/gravity_forms/22-d0e7dc02747342f2d864072606eb6f8a/2025/02/Deep-Learning-Spring-2025_-Syllabus-and-Schedule.pdf
7. 10 Free Machine Learning Programs for High School Students ..., accessed on May 24, 2025, https://www.veritasai.com/veritasaiblog/free-machine-learning-programs-for-high-school-students
8. AI For Everyone | Coursera, accessed on May 24, 2025, https://www.coursera.org/learn/ai-for-everyone
9. Machine Learning | Coursera, accessed on May 24, 2025, https://www.coursera.org/specializations/machine-learning-introduction
10. Machine learning education | TensorFlow, accessed on May 24, 2025, https://www.tensorflow.org/resources/learn-ml
11. Best YouTube Channels to learn Machine Learning [Updated] - GUVI, accessed on May 24, 2025, https://www.guvi.in/blog/best-youtube-channels-to-learn-machine-learning/
12. 3Blue1Brown: Neural Networks - Manning Publications, accessed on May 24, 2025, https://www.manning.com/livevideo/3blue1brown-neural-networks
13. 3Blue1Brown, accessed on May 24, 2025, https://www.3blue1brown.com/
14. Neural networks - YouTube, accessed on May 24, 2025, https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi
15. The StatQuest Illustrated Guide To Machine Learning: Starmer, Josh - Amazon.com, accessed on May 24, 2025, https://www.amazon.com/StatQuest-Illustrated-Guide-Machine-Learning/dp/B0BLM4TLPY
16. The StatQuest Illustrated Guide to Machine Learning!!!: Master the concepts, one full-color picture at a time, from the basics all the way to neural networks. BAM! by Josh Starmer | Goodreads, accessed on May 24, 2025, https://www.goodreads.com/book/show/63184362-the-statquest-illustrated-guide-to-machine-learning
17. The Statquest Illustrated Guide To Machine Learning - D-Marin, accessed on May 24, 2025, https://ftp.d-marin.com/pdf/download-manual/R3h5/download/the-statquest-illustrated-guide-to-machine-learning.pdf
18. The Statquest Illustrated Guide To Machine Learning Josh Starmer - Pay Commission, accessed on May 24, 2025, https://paycommission.gov.ie/fulldisplay/027401/TheStatquestIllustratedGuideToMachineLearningJoshStarmer.pdf
19. The StatQuest Illustrated Guide to Neural Networks and AI: Starmer, Josh - Amazon.com, accessed on May 24, 2025, https://www.amazon.com/StatQuest-Illustrated-Guide-Neural-Networks/dp/B0DQXYC14Q
20. The Essential Main Ideas of Neural Networks - YouTube, accessed on May 24, 2025, https://www.youtube.com/watch?v=CqOfi41LfDw
21. Beginner seeking Deep Learning study resources - ML background covered. : r/learnmachinelearning - Reddit, accessed on May 24, 2025, https://www.reddit.com/r/learnmachinelearning/comments/1kjych9/beginner_seeking_deep_learning_study_resources_ml/
22. Learn Machine Learning - Reddit, accessed on May 24, 2025, https://www.reddit.com/r/learnmachinelearning/
23. ML is math. You need math. You may not need to learn super advanced category theory(but you should), but at least Algebra and stat is required; ML is math. You can't avoid it, learn to enjoy it. Also states what you want to study in ML when asking for partners, - Reddit, accessed on May 24, 2025, https://www.reddit.com/r/learnmachinelearning/comments/1kqixp6/ml_is_math_you_need_math_you_may_not_need_to/
24. Aerospace Engineer learning ML : r/learnmachinelearning - Reddit, accessed on May 24, 2025, https://www.reddit.com/r/learnmachinelearning/comments/1kolmuw/aerospace_engineer_learning_ml/
25. How do I actually practice machine learning? : r/learnmachinelearning - Reddit, accessed on May 24, 2025, https://www.reddit.com/r/learnmachinelearning/comments/1f9bm18/how_do_i_actually_practice_machine_learning/
26. Deep learning help : r/learnmachinelearning - Reddit, accessed on May 24, 2025, https://www.reddit.com/r/learnmachinelearning/comments/1k4m3il/deep_learning_help/
27. What's the biggest mistake you made when learning machine learning, and what did you learn from it? : r/learnmachinelearning - Reddit, accessed on May 24, 2025, https://www.reddit.com/r/learnmachinelearning/comments/1irmzec/whats_the_biggest_mistake_you_made_when_learning/
28. The Ultimate Beginner Guide to Machine Learning : r/learnmachinelearning - Reddit, accessed on May 24, 2025, https://www.reddit.com/r/learnmachinelearning/comments/1fxqko8/the_ultimate_beginner_guide_to_machine_learning/
29. The Best Deep Learning Books for Beginners, accessed on May 24, 2025, https://www.fullstacko.com/blog/best-deep-learning-books/
30. Neural networks and deep learning, accessed on May 24, 2025, http://neuralnetworksanddeeplearning.com/
31. [Book Review] "The Hundred-Page Machine Learning Book" by Andriy Burkov | Kaggle, accessed on May 24, 2025, https://www.kaggle.com/discussions/general/575404
32. The Hundred-Page Machine Learning Book: Burkov, Andriy - Amazon.com, accessed on May 24, 2025, https://www.amazon.com/Hundred-Page-Machine-Learning-Book/dp/199957950X
33. Reviews - An Introduction to Statistical Learning, accessed on May 24, 2025, https://www.statlearning.com/reviews
34. An Introduction to Statistical Learning: with Applications in R (Springer Texts in Statistics), accessed on May 24, 2025, https://www.amazon.com/Introduction-Statistical-Learning-Applications-Statistics/dp/1461471370
35. MITx: Machine Learning with Python: from Linear Models to Deep Learning. - edX, accessed on May 24, 2025, https://www.edx.org/learn/machine-learning/massachusetts-institute-of-technology-machine-learning-with-python-from-linear-models-to-deep-learning
36. Introduction to Machine Learning - MIT Open Learning Library, accessed on May 24, 2025, https://openlearninglibrary.mit.edu/courses/course-v1:MITx+6.036+1T2019/about
37. CS230 Deep Learning - Stanford University, accessed on May 24, 2025, https://cs230.stanford.edu/
38. CS230 - Lecture 1 - Introduction to Deep Learning (Spring 2022), accessed on May 24, 2025, https://cs230.stanford.edu/syllabus/fall_2024/lecture_1.pdf
39. Bias in AI: Examples and 6 Ways to Fix it in 2025 - Research AIMultiple, accessed on May 24, 2025, https://research.aimultiple.com/ai-bias/
40. What is AI bias? Causes, effects, and mitigation strategies | SAP, accessed on May 24, 2025, https://www.sap.com/resources/what-is-ai-bias
41. What Is AI Bias? | IBM, accessed on May 24, 2025, https://www.ibm.com/think/topics/ai-bias
42. Navigating Data Privacy - MIT Sloan Teaching & Learning Technologies, accessed on May 24, 2025, https://mitsloanedtech.mit.edu/ai/policy/navigating-data-privacy/
43. 9 tips on how to protect student privacy when using AI tools - Chalkbeat, accessed on May 24, 2025, https://www.chalkbeat.org/2024/12/13/using-ai-and-protecting-student-privacy-is-hard-here-are-9-tips/
44. How to Protect Student Privacy When Using AI - TCEA Blog, accessed on May 24, 2025, https://blog.tcea.org/how-to-protect-student-privacy-when-using-ai/
45. Teaching AI: Ethical Considerations for High School Students - Learning.com, accessed on May 24, 2025, https://www.learning.com/blog/teaching-ai-ethical-considerations-high-school-students/
46. Ethical AI for Teaching and Learning - Center for Teaching Innovation - Cornell University, accessed on May 24, 2025, https://teaching.cornell.edu/generative-artificial-intelligence/ethical-ai-teaching-and-learning