Machine Learning Workflow: From Data and Features to Predictions

Machine learning is not just feeding data into an algorithm. A reproducible machine learning project usually follows a stable workflow: define the problem, inspect the data, build features, train a model, evaluate the result, and then use the model for prediction.

This article does not try to cover every algorithm. Instead, it explains the workflow from an engineering perspective. Once this structure is clear, linear regression, logistic regression, decision trees, and neural networks become much easier to place.

While reading, focus on three questions: what data enters the system, what transformations happen in the middle, and which metrics tell you whether the output is reliable.

1. Define the Problem

Before writing model code, answer this question:

Which inputs does the model receive, and what output should it predict?

Common problem types include:

  • Classification: predict a category, such as spam or not spam
  • Regression: predict a continuous value, such as price, demand, or temperature
  • Clustering: group data without labels, such as user segmentation
  • Ranking: order candidate results, such as search or recommendation output

If the problem is vague, you may train a model but still have no reliable way to judge whether it is useful.

2. Understand Each Column

For beginners, the most common data shape is a table:

sample  feature1  feature2  feature3  label
1       ...       ...       ...       A
2       ...       ...       ...       B
3       ...       ...       ...       A

The key concepts are:

  • Sample: usually one row of data
  • Feature: an input field used for prediction
  • Label: the known answer in supervised learning

Before writing code, understand what each column means, what unit it uses, what range it should have, and whether obvious bad values exist. Many machine learning failures come from misunderstood data rather than weak algorithms.
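This inspection step can be done directly in pandas. The sketch below uses a tiny made-up table (the column names and values are hypothetical) to show how dtypes, missing counts, and value ranges surface problems before any modeling:

```python
import pandas as pd

# A tiny toy table standing in for your real dataset (hypothetical values).
df = pd.DataFrame({
    "age":    [34, 29, -1, 45],             # -1 is an obviously bad value for age
    "income": [52000, 48000, 61000, None],  # one missing value
    "label":  ["A", "B", "A", "B"],
})

print(df.dtypes)              # confirm each column's type
print(df.isna().sum())        # income has one missing entry
print(df["age"].describe())   # a minimum of -1 reveals a bad value

bad_ages = df[df["age"] < 0]  # rows that violate the expected range
```

A few minutes with `dtypes`, `isna`, and `describe` often catches exactly the misunderstood-data failures described above.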

3. Split Training and Test Data

A model should not be judged only on data it used for training. To check whether it learned a general pattern, split the data:

  • Training set: used to fit model parameters
  • Test set: used to estimate behavior on new data

A common scikit-learn pattern is:

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X,
    y,
    test_size=0.2,
    random_state=42
)

random_state fixes the split, which makes experiments easier to reproduce.
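When classes are imbalanced, passing stratify=y keeps the class proportions consistent across both splits. A small sketch with synthetic labels (the 90/10 ratio is made up for illustration):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic imbalanced labels: 90 of class 0, 10 of class 1.
X = np.arange(100).reshape(-1, 1)
y = np.array([0] * 90 + [1] * 10)

# stratify=y preserves the 90/10 class ratio in both train and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
```

Without stratification, a small test set can end up with too few (or zero) minority-class samples, which distorts every metric computed on it.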

4. Process Features

Models usually work with numbers, so raw data often needs conversion. Common feature processing steps include:

  • Encoding text categories as numbers
  • Handling missing values
  • Standardizing numeric features
  • Removing fields that are meaningless or leak the answer

Standardization is common for methods that are sensitive to numeric scale, such as logistic regression, K-means, and neural networks:

x_scaled = (x - mean) / std

It does not change the basic relationship between samples, but it puts different numeric features on more comparable scales.
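In scikit-learn this is typically done with StandardScaler. The key detail, sketched below on synthetic data, is that the mean and standard deviation are learned from the training set only and then reused on the test set:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Synthetic feature matrix: two columns on very different scales.
rng = np.random.default_rng(0)
X_train = rng.normal(loc=[50, 0.5], scale=[10, 0.1], size=(80, 2))
X_test = rng.normal(loc=[50, 0.5], scale=[10, 0.1], size=(20, 2))

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # learn mean/std on training data only
X_test_scaled = scaler.transform(X_test)        # reuse the training statistics
```

Fitting the scaler on the full dataset before splitting would leak test-set statistics into training, which is one of the common mistakes discussed later in this article.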

5. Choose a Baseline Model

Do not start with the most complex model. First build a baseline:

  • For classification, try logistic regression or a decision tree
  • For regression, try linear regression
  • For clustering, try K-means

The baseline does not have to be the best model. It gives you a reference point. Later model changes, feature changes, and parameter changes should be compared against it.
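As a sketch, here is a logistic regression baseline on scikit-learn's built-in iris dataset, used as a stand-in for your own data:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Built-in dataset used as a stand-in for real project data.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

baseline = LogisticRegression(max_iter=1000)
baseline.fit(X_train, y_train)
score = baseline.score(X_test, y_test)  # baseline accuracy on held-out data
```

Whatever score this produces becomes the number every later experiment has to beat.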

6. Train the Model

In scikit-learn, training is usually expressed with a consistent method call:

model.fit(X_train, y_train)

Behind this call, the model adjusts internal parameters so predictions become closer to labels in the training data.

Different algorithms have different parameter meanings, but the goal is the same: find parameters that reduce mistakes on training data without merely memorizing it.

7. Predict and Evaluate

After training, predict on the test set:

y_pred = model.predict(X_test)

Then measure performance. Common classification metrics include:

  • Accuracy: the overall proportion of correct predictions
  • Precision: among predicted positives, how many are truly positive
  • Recall: among true positives, how many the model found
  • F1-score: the harmonic mean of precision and recall

Do not rely on one number. Accuracy can be misleading when classes are imbalanced.
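All four metrics are available in sklearn.metrics. The sketch below uses small hand-made label lists (the values are hypothetical) so each number can be checked against the definitions above:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical true labels and predictions for a binary task.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

acc = accuracy_score(y_true, y_pred)    # correct predictions / all predictions
prec = precision_score(y_true, y_pred)  # true positives / predicted positives
rec = recall_score(y_true, y_pred)      # true positives / actual positives
f1 = f1_score(y_true, y_pred)           # harmonic mean of precision and recall
```

Here the model makes one false positive and one false negative, so all four metrics land at 0.8; in real projects the metrics usually diverge, which is exactly why you should inspect more than one.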

8. The Whole Workflow

Combined, a minimal workflow looks like this:

# 1. Prepare X and y
# 2. Split training and test data
# 3. Process features
# 4. Train a model
# 5. Predict on the test set
# 6. Compute evaluation metrics

Real projects may add logging, cross-validation, model persistence, deployment, and monitoring. But even complex systems still depend on this core sequence.
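The six commented steps above can be sketched as one runnable script. This is a minimal sketch, using scikit-learn's built-in breast cancer dataset as a stand-in for real project data:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# 1. Prepare X and y
X, y = load_breast_cancer(return_X_y=True)

# 2. Split training and test data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# 3. Process features: fit the scaler on training data only
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# 4. Train a model
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# 5. Predict on the test set
y_pred = model.predict(X_test)

# 6. Compute evaluation metrics
accuracy = accuracy_score(y_test, y_pred)
```

Every section of this article maps onto one of the numbered steps; swapping in your own data or a different model does not change the shape of the script.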

9. A Good Practice Checklist

When practicing machine learning, write down answers to these questions:

  • What are the input features and target label?
  • How were training and test data split?
  • Which feature processing steps were used?
  • What baseline model was chosen?
  • Which metric was used, and why?
  • What do the model’s mistakes have in common?

If you can answer these questions, you are no longer just copying code. You are starting to analyze problems in the machine learning workflow.

10. Common Mistakes

When building a first machine learning project, beginners often run into these problems:

  • Processing the full dataset before splitting train and test data, which leaks test information into training
  • Skipping a baseline model and jumping directly to complex algorithms
  • Printing only accuracy without checking class balance or wrong predictions
  • Trusting column names without confirming what each field actually means

If you actively avoid these issues, even a small project becomes much easier to trust.
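The third mistake, trusting accuracy alone, is easy to demonstrate. In this sketch with synthetic 95/5 imbalanced labels, a model that always predicts the majority class scores 95% accuracy while finding zero actual positives:

```python
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

# Synthetic imbalanced labels: 95% negative, 5% positive.
y_true = np.array([0] * 95 + [1] * 5)

# A useless "model" that always predicts the majority class.
y_pred = np.zeros_like(y_true)

acc = accuracy_score(y_true, y_pred)  # looks impressive despite learning nothing
rec = recall_score(y_true, y_pred)    # reveals that no positives were found
```

Checking recall (or a confusion matrix) alongside accuracy exposes this failure immediately.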

11. What to Read Next

The previous article is the AI Basics Learning Roadmap. After the full workflow is clear, continue with Model Training and Evaluation to understand loss functions, overfitting, and metrics.

