Python AI Mini Practice: A Classification Task with scikit-learn

The previous articles covered AI concepts, the machine learning workflow, model training and evaluation, and neural network basics. This article runs a small end-to-end practice project: a binary classification task with Python and scikit-learn.

The example uses the breast cancer dataset built into scikit-learn, so no external data file is required. The goal is not to chase the highest score. The goal is to walk through loading data, splitting data, standardizing features, training, predicting, and evaluating.

Note: this dataset is used here only for machine learning practice. It should not be used for medical decisions or real diagnosis. The article focuses on the classification workflow, not medical conclusions.

1. Prepare the Environment

Create a virtual environment and install the dependency:

python3 -m venv .venv
source .venv/bin/activate
pip install scikit-learn

This example uses only scikit-learn, not a deep learning framework. That keeps the focus on the basic machine learning workflow.

2. Complete Code

The following script can be run directly:

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler


def main():
    dataset = load_breast_cancer()
    X = dataset.data
    y = dataset.target

    X_train, X_test, y_train, y_test = train_test_split(
        X,
        y,
        test_size=0.2,
        random_state=42,
        stratify=y,
    )

    model = Pipeline(
        steps=[
            ("scaler", StandardScaler()),
            ("classifier", LogisticRegression(max_iter=500)),
        ]
    )

    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)

    print("Accuracy:", accuracy_score(y_test, y_pred))
    print("Confusion matrix:")
    print(confusion_matrix(y_test, y_pred))
    print("Classification report:")
    print(classification_report(y_test, y_pred, target_names=dataset.target_names))


if __name__ == "__main__":
    main()

Save it as ai_classification_demo.py and run:

python ai_classification_demo.py

If the script cannot import scikit-learn, or the first import feels slow, confirm that the virtual environment is active and run python -c "import sklearn; print(sklearn.__version__)" to check that scikit-learn is installed.

3. What the Dataset Contains

load_breast_cancer() returns a small binary classification dataset: 569 samples, each described by 30 numeric features, with a label marking one of two classes (malignant or benign).

In the script:

  • X is the feature matrix, with one row per sample
  • y is the label array, with one label per sample
  • dataset.target_names contains the class names

The dataset is already prepared as numeric features, which makes it useful for practicing classification basics.
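
A quick inspection snippet, separate from the main script, confirms these points:

from sklearn.datasets import load_breast_cancer
import numpy as np

dataset = load_breast_cancer()
print(dataset.data.shape)           # (569, 30): 569 samples, 30 numeric features
print(dataset.target_names)         # ['malignant' 'benign']
print(np.bincount(dataset.target))  # samples per class
print(dataset.feature_names[:5])    # first few feature names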

4. Why Split Training and Test Data?

The script uses train_test_split():

X_train, X_test, y_train, y_test = train_test_split(
    X,
    y,
    test_size=0.2,
    random_state=42,
    stratify=y,
)

test_size=0.2 means 20% of the data is reserved for testing. stratify=y tries to preserve the class ratio after the split, which is useful for classification.

If you evaluate only on training data, a high score can simply mean the model memorized the training examples instead of learning a pattern that generalizes.
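
To see the effect of stratify=y, print the class proportions after the split. With the variables from the script, a check like this should show nearly identical ratios in the full, training, and test sets:

import numpy as np

# Class proportions should stay roughly the same across all three sets.
print("Full set: ", np.bincount(y) / len(y))
print("Train set:", np.bincount(y_train) / len(y_train))
print("Test set: ", np.bincount(y_test) / len(y_test))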

5. Why Use Pipeline?

The code uses Pipeline instead of manually standardizing first and training later:

model = Pipeline(
    steps=[
        ("scaler", StandardScaler()),
        ("classifier", LogisticRegression(max_iter=500)),
    ]
)

This has two benefits:

  • Standardization and classification stay in one reproducible workflow
  • The test set uses scaling parameters learned only from the training set, which avoids data leakage

Data leakage is a common beginner mistake. If you standardize the full dataset before splitting, information from the test set has already influenced training.
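
For comparison, here is a manual sketch of what the Pipeline does internally. The key detail is that fit_transform is called only on the training split, while the test split is transformed with the statistics learned from training:

from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Fit the scaler on the training split only, then reuse its statistics for the test split.
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

clf = LogisticRegression(max_iter=500)
clf.fit(X_train_scaled, y_train)
y_pred = clf.predict(X_test_scaled)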

6. Why Logistic Regression?

Logistic regression is a classic baseline for classification. It is fast, stable, and easier to explain than many more complex models.

This example does not start with a neural network because running the full workflow is more important at this stage. Once every line in this script is clear, replacing the classifier with a random forest, support vector machine, or neural network becomes more meaningful.
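
When that time comes, the swap can stay small. As a sketch, only the classifier step in the pipeline changes (n_estimators=100 and random_state=42 are illustrative, not tuned, values):

from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Swap only the classifier step; the rest of the workflow stays the same.
model = Pipeline(
    steps=[
        ("scaler", StandardScaler()),
        ("classifier", RandomForestClassifier(n_estimators=100, random_state=42)),
    ]
)

Tree-based models do not need standardization, but keeping the scaler step means only one line differs between experiments.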

7. How to Read the Evaluation

The script prints three kinds of results:

  • Accuracy: the overall proportion of correct predictions
  • confusion_matrix: which classes were predicted incorrectly
  • classification_report: precision, recall, F1-score, and related metrics

Even if accuracy is high, do not stop there. Check the confusion matrix to see which class causes mistakes, then compare precision and recall to the requirements of the problem.
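
One way to make the confusion matrix concrete is to unpack its four cells by name. For a binary problem, and with the y_test and y_pred variables from the script, a sketch like this works:

from sklearn.metrics import confusion_matrix

# ravel() flattens the 2x2 matrix into four counts for a binary problem.
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print("true negatives: ", tn)
print("false positives:", fp)
print("false negatives:", fn)
print("true positives: ", tp)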

8. What to Try Next

After the script runs, try a few small experiments:

  • Change test_size to 0.3 and see whether results stay stable
  • Remove StandardScaler and compare the metrics
  • Replace LogisticRegression with RandomForestClassifier
  • Print dataset.feature_names and read what each feature means
  • Find the indexes of wrong predictions and inspect those samples (see the sketch after this list)
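
For the last item, a short sketch added at the end of main(), using the script's variables, lists where the model went wrong:

import numpy as np

# Positions within the test split where the prediction and the label disagree.
wrong = np.where(y_pred != y_test)[0]
print("Misclassified test samples:", wrong)
print("True labels:     ", y_test[wrong])
print("Predicted labels:", y_pred[wrong])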

The key to learning AI foundations is to make each example explainable. In this practice project, you did not just run a classifier. You walked through a complete machine learning workflow.

9. Add This Practice to Your Notes

After running the code, record these details:

  • How many samples, features, and classes the dataset contains
  • How many samples are in the training and test sets
  • The accuracy, precision, recall, and F1-score
  • Which type of mistake appears more often in the confusion matrix
  • What changes when you remove standardization or switch models

These notes are more useful than saving only one accuracy value because they help you explain the experiment, not just preserve the result.

10. Series Review

This article turns the previous concepts into code. To revisit the foundations, start again from the AI Basics Learning Roadmap, or return to the Blog page for the full series.
