TransferLearning.txt
Steps 1–6: Data loading, preprocessing, transfer learning, training the model, and saving it.
Step 7: Evaluating performance → confusion matrix & classification report.
Outcome: You can now measure accuracy, precision, recall, and F1 for fruit disease detection.
Next Steps (LO3–LO4 focus)
Step 8 – Optimization & Fine-tuning
Adjust hyperparameters (epochs, batch size, learning rate).
Try unfreezing some base CNN layers for fine-tuning.
Show how performance changes (accuracy improves, loss decreases).
Step 9 – Avoiding Overfitting/Underfitting
Demonstrate use of data augmentation, dropout, early stopping.
Show training vs validation curves (accuracy/loss) to discuss overfitting/underfitting.
Step 10 – Final Testing
Test the model on a few unseen images (from dataset or new samples).
Display predictions visually with labels (e.g., “Predicted: Rotten Apple, Actual: Rotten Apple”).
Step 11 – Documentation & Conclusion
Write an explanation (in a human tone) about what you achieved: model accuracy, challenges (overfitting), and how you optimized it.
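The Step 9 countermeasures above (augmentation, dropout, early stopping) can be sketched roughly as follows. The augmentation ranges, dropout rate, and the 28-class output are illustrative assumptions, not values fixed by the assignment:

```python
# Sketch of the Step 9 overfitting countermeasures: augmentation on the
# training images, dropout in the classifier head, and early stopping.
from tensorflow.keras import layers, models
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation: random geometric jitter so the model never sees the exact
# same training image twice (ranges here are illustrative).
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.2,
    horizontal_flip=True,
)

# Dropout in the classification head randomly disables units during training.
# The (7, 7, 1280) shape is what MobileNetV2's feature extractor emits for
# 224x224 inputs; 28 classes is a hypothetical count.
head = models.Sequential([
    layers.Input(shape=(7, 7, 1280)),
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(28, activation="softmax"),
])

# Early stopping halts training when validation loss stops improving and
# rolls back to the best-performing weights.
early_stop = EarlyStopping(monitor="val_loss", patience=3,
                           restore_best_weights=True)
```

Compare the training vs. validation curves before and after enabling these to see whether the gap between them narrows.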
Step 8 In detail:
Alright 🚀 let’s move into Step 8 – Optimization & Fine-tuning.
Since you already trained the model once at image size 224×224, we’ll now try to improve accuracy and generalization by:
Unfreezing some base layers of MobileNetV2 (so it learns more features from your dataset).
Using a lower learning rate (so fine-tuning doesn’t destroy pretrained weights).
Training for a few extra epochs (on top of your previous training).
🔹 What this does
Keeps earlier layers frozen (generic features like edges, shapes).
Trains the last 50 layers of MobileNetV2 to adapt to your fruit disease dataset.
Uses a very small learning rate (1e-5) to make subtle adjustments.
EarlyStopping will stop training if validation loss stops improving.
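The bullets above can be sketched as follows. The variable names, the 28-class count, and the classifier head are illustrative assumptions (the first run downloads the pretrained ImageNet weights):

```python
# Sketch of Step 8 fine-tuning: unfreeze the last 50 MobileNetV2 layers and
# retrain with a very small learning rate plus early stopping.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.callbacks import EarlyStopping

NUM_CLASSES = 28  # hypothetical; set to your dataset's class count

# Pretrained backbone without its ImageNet classification head.
base_model = MobileNetV2(input_shape=(224, 224, 3),
                         include_top=False, weights="imagenet")

# Unfreeze the backbone, then re-freeze everything except the last 50 layers
# so only the most task-specific features get adjusted.
base_model.trainable = True
for layer in base_model.layers[:-50]:
    layer.trainable = False

model = models.Sequential([
    base_model,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# A very small learning rate (1e-5) keeps the pretrained weights from
# being destroyed during fine-tuning.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Stop training once validation loss plateaus and keep the best weights.
early_stop = EarlyStopping(monitor="val_loss", patience=3,
                           restore_best_weights=True)

# Resume training for a few extra epochs on your generators, e.g.:
# model.fit(train_gen, validation_data=val_gen, epochs=10,
#           callbacks=[early_stop])
```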
# Saving the model
model.save("fruit_vegetable_disease_model_224.h5")
9 Sept 2025
✅ Where you stand right now
Data Loading & Preprocessing
Dataset imported and resized properly (224×224).
Train/validation split applied (using ImageDataGenerator).
Normalization done.
✔ This matches the assignment requirement of preparing and preprocessing the dataset.
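The loading-and-preprocessing step above looks roughly like this. The `dataset/` path, batch size, and validation fraction are assumptions; adjust them to your folder layout:

```python
# Sketch of data loading and preprocessing, assuming a hypothetical
# "dataset/" directory with one sub-folder per class.
import os
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (224, 224)  # MobileNetV2's expected input size
BATCH_SIZE = 32        # illustrative batch size

# Normalize pixel values to [0, 1] and reserve 20% of images for validation.
datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)

DATA_DIR = "dataset"  # hypothetical path
if os.path.isdir(DATA_DIR):
    train_gen = datagen.flow_from_directory(
        DATA_DIR, target_size=IMG_SIZE, batch_size=BATCH_SIZE,
        class_mode="categorical", subset="training")
    val_gen = datagen.flow_from_directory(
        DATA_DIR, target_size=IMG_SIZE, batch_size=BATCH_SIZE,
        class_mode="categorical", subset="validation")
```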
Transfer Learning (MobileNetV2)
You’ve trained with frozen base layers.
Accuracy reached ~85–90% on validation within just 5 epochs.
✔ This fulfills the assignment part of using transfer learning.
Fine-Tuning
You unfroze the last ~50 layers and retrained with a smaller learning rate.
Accuracy/loss graphs show consistent improvement.
✔ This shows model optimization, exactly as required.
Model Evaluation
Classification report generated (precision, recall, f1-score).
Confusion matrix plotted (fixing the blank issue now).
✔ This covers the evaluation metrics part of the brief.
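The evaluation step can be sketched with scikit-learn; here tiny made-up label arrays stand in for the real `y_true`/`y_pred` collected from the validation generator:

```python
# Sketch of the evaluation metrics: confusion matrix plus a per-class
# precision/recall/F1 report. The label arrays below are placeholders.
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

y_true = np.array([0, 0, 1, 1, 2, 2])  # placeholder ground-truth labels
y_pred = np.array([0, 1, 1, 1, 2, 0])  # placeholder model predictions

# Rows are true classes, columns are predicted classes; off-diagonal
# entries are misclassifications.
cm = confusion_matrix(y_true, y_pred)
print(cm)

# Per-class precision, recall, and F1, plus overall accuracy.
print(classification_report(y_true, y_pred, digits=3))
```

If the plotted matrix comes out blank, a common cause is plotting before the prediction arrays are actually filled, so check that `y_pred` is non-empty first.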
Misclassified Samples
Output: 652 misclassified out of 3829 validation/test images.
That is a ~17% error rate, i.e. ~83% correct classification.
✅ This is plausible and correct, especially since some fruit classes (like Apple vs Mango) can look visually similar when rotten.
The image output you got for the misclassified examples (true label vs. predicted label) is also correct. It is an excellent addition to your assignment because it demonstrates critical analysis of errors.
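Finding those misclassified samples boils down to comparing the two label arrays; a minimal sketch (with small placeholder arrays standing in for the 3829 real labels):

```python
# Sketch of locating misclassified samples, assuming `y_true` and `y_pred`
# are integer class labels over the validation set.
import numpy as np

y_true = np.array([0, 1, 2, 1, 0])  # placeholder ground-truth labels
y_pred = np.array([0, 2, 2, 1, 1])  # placeholder model predictions

# Indices where prediction and ground truth disagree; these index back
# into the validation images for the "True vs Predicted" visualization.
wrong_idx = np.where(y_pred != y_true)[0]
error_rate = len(wrong_idx) / len(y_true)
print(f"{len(wrong_idx)} misclassified out of {len(y_true)} "
      f"({error_rate:.1%} error rate)")
# With the report's real numbers: 652 / 3829 ≈ 17.0% error, ≈ 83.0% accuracy.
```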