
What Will You Learn?
By the end of this lesson, you will be able to:
- Understand what deployment means in the AI Project Cycle
- Know the steps involved in deploying an AI solution
- Recognize challenges in real-world AI deployment
- Understand the importance of monitoring and maintenance
- See examples of deployed AI systems in action
Imagine you’ve invented an amazing recipe. You’ve perfected it in your kitchen, and everyone who’s tasted it says it’s delicious. But here’s the question: does that mean you’re ready to open a restaurant?
Not quite. You’d need to set up a kitchen that can handle hundreds of orders. Train staff. Get licenses. Market it. Handle customer feedback. Deal with busy nights and slow nights.
The same applies to AI. Building a model that works in testing is just the beginning. Deployment is about making that model work for real users, in the real world, day after day.
This is the final stage of the AI Project Cycle, and often the most challenging, because the solution must now prove itself in the real world.
What is AI Deployment?
Deployment is the sixth and final stage of the AI Project Cycle where we:
- Put the trained AI model into a production environment
- Make it accessible to real users
- Integrate it with existing systems
- Monitor its performance continuously
- Update and maintain it over time
Think of deployment as the difference between a prototype car and a car you can actually buy and drive. The prototype might work, but the production version needs to be safe, reliable, and ready for everyday use.
What to Do Before Deployment: The Checklist
Before deploying an AI model, you should verify:
| Checklist Item | Why It Matters |
|---|---|
| Evaluation passed | Model meets performance requirements |
| Test data was separate from training data | Model wasn’t just memorizing |
| Edge cases tested | Model handles unusual situations |
| Error handling exists | System doesn’t crash on bad inputs |
| Documentation complete | Others can understand and maintain it |
| Ethical review done | No harmful biases or privacy issues |
| Legal compliance | Meets regulations and standards |
| User acceptance | Target users have tested and approved |
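One way to make such a checklist concrete is to encode it in a small script that refuses to proceed until every item is done. The sketch below is illustrative only; the item names and values are assumptions, not a real deployment tool.

```python
# Sketch: the pre-deployment checklist as a dictionary. A deployment
# script could check it before releasing anything to real users.
checklist = {
    "evaluation_passed": True,
    "test_data_separate": True,
    "edge_cases_tested": True,
    "error_handling_exists": True,
    "documentation_complete": False,  # still pending!
    "ethical_review_done": True,
    "legal_compliance": True,
    "user_acceptance": True,
}

def ready_to_deploy(items: dict) -> bool:
    """Deploy only when every checklist item is ticked."""
    return all(items.values())

print(ready_to_deploy(checklist))  # False — documentation is incomplete
```

Running this prints `False`, blocking deployment until the documentation item is completed.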
Steps in AI Deployment
AI deployment happens in five stages — environment preparation, integration, deployment, performance monitoring, and maintenance and updates.
Let’s dive into each step.
Step 1: Prepare the Environment
The development environment (your computer or test setup) is different from the production environment (where real users access it).
| Aspect | Development | Production |
|---|---|---|
| Users | Just you/team | Thousands or millions |
| Availability | When you’re working | 24/7 |
| Errors | Acceptable, used for learning | Must be handled gracefully |
| Performance | “Good enough” | Must be fast and reliable |
Preparing the environment means setting up servers, databases, and infrastructure that can handle real-world demands.
Step 2: Integrate with Existing Systems
AI rarely works alone. It needs to connect with multiple tools and systems.
| Integration | Example |
|---|---|
| User interfaces | Mobile app, website, chatbot |
| Databases | To access and store data |
| Other software | Email systems, payment systems |
| Hardware | Cameras, sensors, IoT devices |
Example: A spam filter AI must integrate with the email server to receive emails, make predictions, and move spam to the spam folder.
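To see what that integration point looks like in code, here is a minimal sketch. The "model" is a hypothetical stand-in (a keyword check instead of a trained classifier), and `route_email` plays the role of the email server hook.

```python
# Hypothetical stand-in for a trained spam model: flags emails
# containing suspicious phrases. A real deployment would call a
# trained model's predict() here instead.
SUSPICIOUS_WORDS = {"lottery", "winner", "free money", "urgent prize"}

def predict_spam(email_text: str) -> bool:
    """Stand-in for the model's prediction call."""
    text = email_text.lower()
    return any(word in text for word in SUSPICIOUS_WORDS)

def route_email(email_text: str) -> str:
    """Integration point: the email server asks which folder to use."""
    return "spam" if predict_spam(email_text) else "inbox"

print(route_email("You are a lottery winner!"))  # spam
print(route_email("Team meeting at 10 am"))      # inbox
```

The key deployment idea is the boundary: the email system never needs to know how the model works, only that `route_email` answers "spam" or "inbox".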
Step 3: Deploy the Model
The actual release can happen in different ways.
| Deployment Type | Description | When to Use |
|---|---|---|
| Big Bang | Replace old system completely | When new system is clearly better |
| Phased | Deploy to one group, then expand | Lower risk, can catch problems early |
| A/B Testing | Some users get AI, others don’t | To compare performance |
| Shadow Mode | AI runs but doesn’t affect outcomes | To test without risk |
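For A/B testing, each user must be assigned to one group and stay there, so their experience is consistent. A common trick is to hash the user's ID. The sketch below assumes a simple 50/50 split for illustration.

```python
import hashlib

def ab_group(user_id: str) -> str:
    """Assign a user to 'new_ai' or 'old_system' deterministically.

    Hashing the ID means the same user always lands in the same
    group, visit after visit, without storing anything.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return "new_ai" if int(digest, 16) % 2 == 0 else "old_system"
```

Because the assignment is derived from the ID itself, no database lookup is needed, and the split stays stable as long as the hashing scheme doesn't change.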
Step 4: Monitor Performance
After deployment, your task is not over. You need to keep tabs on the system's performance so you can identify what isn't working and improve the model accordingly.
| What to Monitor | Why |
|---|---|
| Accuracy | Is it still making correct predictions? |
| Speed | Are responses fast enough? |
| Usage patterns | Are users actually using it? |
| Errors | What’s going wrong? |
| Feedback | What are users saying? |
| Fairness | Is it treating all groups equally? |
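A toy version of accuracy monitoring can be written in a few lines. This is a teaching sketch: the 90% alert threshold is an arbitrary assumption, and real monitoring systems track many more signals.

```python
class AccuracyMonitor:
    """Track live accuracy and raise a flag when it drops too low."""

    def __init__(self, alert_below: float = 0.9):
        self.correct = 0
        self.total = 0
        self.alert_below = alert_below

    def record(self, predicted, actual):
        """Call this each time the true answer becomes known."""
        self.total += 1
        if predicted == actual:
            self.correct += 1

    def accuracy(self):
        return self.correct / self.total if self.total else None

    def needs_attention(self) -> bool:
        acc = self.accuracy()
        return acc is not None and acc < self.alert_below
```

In practice the hard part is getting `actual` — the true label often arrives late (e.g. a user marks an email as "not spam" days afterwards), which is why monitoring is continuous rather than a one-time check.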
Step 5: Maintain and Update
Based on the data you gather from the previous step, you should:
- Retrain the AI model periodically with new data
- Fix bugs and issues as they come up
- Update for new requirements
- Scale to more users or increase scope if usage grows
- Retire if the model is no longer needed
Challenges in Real-World Deployment
Challenge 1: Data Drift
The real world changes over time. What worked yesterday might not work tomorrow, which means your training data may no longer represent reality. This is called data drift.
Data drift is the change in data patterns between what an AI model was trained on and the data it receives later. When the new data behaves differently, the model's predictions become less accurate. Detecting drift helps keep the model reliable over time.
Example: A spam filter trained in 2020 might struggle in 2025 because spammers have learned new tricks.
Solution: Regularly retrain with new data. Monitor for declining performance.
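The simplest possible drift check compares the average of incoming data against the average of the training data. Real systems use proper statistical tests; the 20% tolerance below is just an assumption for this sketch.

```python
def mean(values):
    return sum(values) / len(values)

def drift_detected(training_values, live_values, tolerance=0.2):
    """Flag drift when the live average strays too far from training.

    A toy check: real drift detection compares whole distributions,
    not just means.
    """
    train_mean = mean(training_values)
    live_mean = mean(live_values)
    if train_mean == 0:
        return live_mean != 0
    return abs(live_mean - train_mean) / abs(train_mean) > tolerance

print(drift_detected([10, 10, 10], [10, 11, 10]))  # False — close enough
print(drift_detected([10, 10, 10], [20, 21, 19]))  # True — big shift
```

When the check fires, the usual response is the solution above: retrain on fresh data and re-evaluate before redeploying.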
Challenge 2: Edge Cases
Edge cases are rare situations that happen at the extreme ends of normal data or conditions. These cases don’t follow the usual patterns, so AI models may struggle to handle them correctly. Identifying edge cases helps improve the model’s safety and reliability.
Example: A face recognition system trained on front-facing photos fails when someone wears a mask or turns sideways.
Solution: Comprehensive testing. Plan for graceful failures.
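"Graceful failure" means the system returns a safe fallback instead of crashing on an input it can't handle. In this sketch, `classify_image` is a hypothetical stand-in for a real model call.

```python
def classify_image(image):
    """Hypothetical model call: fails on empty input."""
    if image is None or len(image) == 0:
        raise ValueError("empty image")
    return "face_detected"

def safe_classify(image):
    """Wrap the model so edge cases fail gracefully."""
    try:
        return classify_image(image)
    except ValueError:
        # Edge case: return a safe default and flag for human review
        # instead of crashing the whole system.
        return "needs_human_review"

print(safe_classify(None))        # needs_human_review
print(safe_classify([1, 2, 3]))   # face_detected
```

The pattern matters more than the details: every prediction call in production should sit behind a wrapper that decides what happens when the model can't answer.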
Challenge 3: Scale Issues
What works for 10 users might crash with 10,000.
Scaling issues occur when an AI system works well with small amounts of data or users but struggles as the workload increases. As more people use the system or as the data grows, the model may slow down, require more computing power, or become less accurate.
This challenge matters because an AI solution must handle real-world demand reliably, not just perform well in controlled tests.
Example: A recommendation system that responds in 0.1 seconds for 100 users takes 10 seconds when 10,000 users access it simultaneously.
Solution: Load testing before deployment. Scalable infrastructure.
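A load test, at its simplest, times a batch of simulated requests and checks the total stays within a budget. The fake `handle_request` and the budgets below are assumptions for illustration; real load tests fire concurrent requests at a staging server.

```python
import time

def handle_request(x):
    """Stand-in for the model serving one prediction."""
    return x * 2

def load_test(n_requests: int, budget_seconds: float) -> bool:
    """Return True if n_requests complete within the time budget."""
    start = time.perf_counter()
    for i in range(n_requests):
        handle_request(i)
    elapsed = time.perf_counter() - start
    return elapsed <= budget_seconds
```

The point of running this *before* deployment is to find the breaking point in a test, not in front of 10,000 real users.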
Challenge 4: User Adoption
User adoption is the challenge of getting people to actually use and trust an AI system once it is deployed. Many users hesitate because they may not understand how the system works, may not trust its decisions, or may fear it will replace their jobs.
When adoption is low, even a well-designed AI system cannot deliver its full value, so organizations must focus on training, clear communication, and building confidence in the tool.
Example: Doctors ignore AI suggestions because they don’t trust how the system has reached its decisions.
Solution: User training. Explainable AI. Gradual introduction.
Challenge 5: Ethical and Legal Issues
Ethical and legal issues arise when deploying AI because the system’s decisions can affect people in sensitive ways.
An AI model might show bias, misuse personal data, or make decisions that are hard to explain, which creates fairness and privacy concerns. Laws and regulations also require organizations to handle data responsibly and ensure the AI behaves safely, which makes deployment more complex and demanding.
We will talk about them in more detail in the next lesson, but here is an example.
Example: A hiring AI that discriminates against certain groups. Who is responsible?
Solution: Ethical review. Bias testing. Compliance with regulations.
Types of Deployment Platforms
AI can be deployed in different ways:
| Platform | Description | Example |
|---|---|---|
| Cloud-based | AI runs on internet servers | Google Translate, ChatGPT |
| On-premise | AI runs on organization’s own servers | Hospital’s patient data AI |
| Edge/Device | AI runs directly on devices | Face ID on your phone |
| Hybrid | Combination of above | Smart assistant (basic on device, complex on cloud) |
Choosing the Right Platform
| Factor | Cloud | On-Premise | Edge |
|---|---|---|---|
| Data privacy | Data leaves device | Data stays internal | Data never leaves device |
| Internet needed | Yes | Maybe | No |
| Processing power | Unlimited | Limited by hardware | Very limited |
| Cost model | Pay per use | High upfront | Per device |
| Updates | Instant for all | Requires maintenance | Need to update each device |
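The table above can be read as a decision rule. The sketch below captures only two of the factors (offline use and data privacy) as a teaching simplification; a real platform decision weighs cost, processing power, and update strategy too.

```python
def choose_platform(needs_offline: bool, data_must_stay_private: bool) -> str:
    """Toy decision helper based on two factors from the table."""
    if needs_offline:
        return "edge"        # only edge works fully without internet
    if data_must_stay_private:
        return "on-premise"  # data stays on the organization's servers
    return "cloud"           # default: maximum power, easy updates

print(choose_platform(True, True))    # edge
print(choose_platform(False, True))   # on-premise
print(choose_platform(False, False))  # cloud
```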
Real-World Case Study: Aravind Eye Hospital Deployment
Let’s see how the diabetic retinopathy AI was deployed.
The Challenge
- 71 vision centers across rural areas
- Limited eye specialists
- Patients who can’t travel to city hospitals
- Need for quick screening results
The Deployment
| Aspect | How It Was Done |
|---|---|
| Platform | Cloud-based with local image capture |
| Integration | Connected to existing hospital systems |
| Workflow | Technician takes image → Uploads → AI analyzes → Result in minutes |
| Training | Local technicians trained to use the system |
| Monitoring | Regular accuracy checks against expert opinions |
Results After Deployment
- Thousands of patients screened in areas with no eye doctors
- Early detection of disease that would have led to blindness
- Doctors can focus on treatment instead of screening
- Patients get results in minutes instead of weeks
Lessons Learned
- Design for actual users — Village technicians, not AI experts
- Handle infrastructure limitations — Sometimes internet is slow
- Build trust gradually — Started with verification by doctors
- Keep humans in the loop — AI assists, doesn’t replace doctors
Activity: Plan Your Deployment
You’ve built an AI that helps students find relevant study materials based on their syllabus and weak areas. Plan its deployment:
| Question | Your Plan |
|---|---|
| Who are the users? | |
| What platform (cloud/device)? | |
| What systems must it integrate with? | |
| What could go wrong? | |
| How will you monitor success? | |
(Suggested answers in Answer Key)
The AI Project Cycle: Complete Picture
Now that we’ve covered all stages, here’s the complete cycle:
| Stage | Key Question | Key Output |
|---|---|---|
| 1. Problem Scoping | What are we solving? | Clear problem statement |
| 2. Data Acquisition | What data do we need? | Collected dataset |
| 3. Data Exploration | What does the data tell us? | Clean, understood data |
| 4. Modelling | How will AI solve this? | Trained model |
| 5. Evaluation | Does it work? | Validated performance |
| 6. Deployment | How do users access it? | Working real-world system |
Remember: The cycle is iterative. After deployment, you may discover issues that send you back to earlier stages. That’s normal and healthy!
Quick Recap
- Deployment is the final stage where AI is put into real-world use.
- Before deploying, verify: evaluation passed, edge cases tested, documentation complete, ethical review done.
- Deployment steps: prepare environment, integrate systems, deploy model, monitor performance, maintain and update.
- Common challenges: data drift, edge cases, scale issues, user adoption, ethical concerns.
- Deployment platforms include cloud-based, on-premise, edge/device, and hybrid.
- After deployment, continuous monitoring and regular updates are essential.
- The Aravind Eye Hospital deployment shows successful real-world AI helping thousands of patients.
- Deployment is not the end — it’s the beginning of ongoing improvement.
Next topic: AI Ethics and Bias: Understanding Privacy, Fairness and Human Rights in AI
Previous topic: How to Evaluate AI Models: True Positive, False Positive and Model Accuracy Explained
EXERCISES
A. Fill in the Blanks
- Deployment is the ____________________ stage of the AI Project Cycle.
- Before deployment, a thorough ____________________ review should be done to check for biases.
- When real-world data changes over time, this is called data ____________________.
- ____________________ testing involves some users getting AI while others don’t.
- AI running directly on devices like phones is called ____________________ deployment.
- The Aravind AI was deployed in ____________________ vision centers across Tamil Nadu.
- After deployment, ____________________ is essential to track if the AI is still working correctly.
- Cloud-based deployment requires an ____________________ connection.
- Deployment that gradually expands from one group to more is called ____________________ deployment.
- The AI Project Cycle is ____________________, meaning you can go back to earlier stages after deployment.
B. Multiple Choice Questions
1. Which stage of the AI Project Cycle is Deployment?
(a) Fourth
(b) Fifth
(c) Sixth
(d) Seventh
2. Before deployment, which should NOT be on the checklist?
(a) Evaluation passed
(b) Documentation complete
(c) Skip testing edge cases
(d) Ethical review done
3. Data drift refers to:
(a) Data moving to different servers
(b) Real-world data patterns changing over time
(c) Data being deleted
(d) Users not providing data
4. A/B testing in deployment means:
(a) Testing all features
(b) Some users get AI, others get old system
(c) Testing only on team members
(d) No testing at all
5. Edge deployment means:
(a) AI runs in the cloud
(b) AI runs on the organization’s servers
(c) AI runs directly on devices
(d) AI runs nowhere
6. What is NOT a challenge in deployment?
(a) Data drift
(b) Scale issues
(c) Perfect accuracy
(d) User adoption
7. In the Aravind deployment, who operates the imaging equipment?
(a) AI only
(b) Patients themselves
(c) Local technicians
(d) Doctors from cities
8. After deployment, monitoring includes checking:
(a) Only accuracy
(b) Only user feedback
(c) Accuracy, speed, usage, errors, and feedback
(d) Nothing — it’s already working
9. Shadow mode deployment means:
(a) AI works in the dark
(b) AI runs but doesn’t affect actual outcomes
(c) AI completely replaces old system
(d) AI is hidden from users
10. Why is user adoption a challenge?
(a) Users always trust AI
(b) Users may not understand or trust the AI
(c) AI always works perfectly
(d) Users don’t need training
C. True or False
- Deployment is the final stage of the AI Project Cycle. (__)
- After deployment, no maintenance is needed. (__)
- Data drift means real-world patterns change over time. (__)
- Edge deployment requires constant internet connection. (__)
- Monitoring should be done continuously after deployment. (__)
- Cloud-based deployment keeps all data on the user’s device. (__)
- The Aravind AI replaced all doctors in rural areas. (__)
- A/B testing helps compare AI performance with the old system. (__)
- User training is unnecessary for successful deployment. (__)
- The AI Project Cycle can return to earlier stages after deployment. (__)
D. Define the Following (30-40 words each)
- Deployment (in AI)
- Data Drift
- Edge Deployment
- Cloud-based Deployment
- A/B Testing
- Shadow Mode
- Monitoring (post-deployment)
E. Very Short Answer Questions (40-50 words each)
- What is Deployment and why is it important?
- What should be checked before deploying an AI model?
- What is data drift and how can it be handled?
- Explain the difference between cloud-based and edge deployment.
- What is A/B testing in deployment?
- Why is user adoption a challenge in deployment?
- What should be monitored after deploying an AI model?
- How was the Aravind Eye Hospital AI deployed?
- Why is the AI Project Cycle called iterative?
- What are the steps involved in AI deployment?
F. Long Answer Questions (75-100 words each)
- Explain the steps involved in deploying an AI solution.
- What are the main challenges in real-world AI deployment? Explain each with examples.
- Compare cloud-based, on-premise, and edge deployment platforms.
- Describe the Aravind Eye Hospital AI deployment — the challenge, the solution, and the impact.
- Why is monitoring important after deployment? What aspects should be monitored?
- Explain the complete AI Project Cycle, briefly describing each of the six stages.
- Plan a deployment strategy for an AI that helps farmers detect crop diseases from smartphone photos.
ANSWER KEY
A. Fill in the Blanks – Answers
- sixth — Deployment is the sixth and final stage.
- ethical — Ethical review checks for biases and fairness.
- drift — Data drift is when patterns change over time.
- A/B — A/B testing compares AI with the old system.
- edge — Edge deployment runs AI on devices.
- 71 — 71 vision centers across Tamil Nadu.
- monitoring — Continuous monitoring tracks performance.
- internet — Cloud-based requires internet connection.
- phased — Phased deployment expands gradually.
- iterative — You can return to earlier stages.
B. Multiple Choice Questions – Answers
- (c) Sixth — Deployment is the sixth stage.
- (c) Skip testing edge cases — Edge cases MUST be tested.
- (b) Real-world data patterns changing over time — Definition of data drift.
- (b) Some users get AI, others get old system — A/B comparison.
- (c) AI runs directly on devices — Edge = on the device.
- (c) Perfect accuracy — Perfect accuracy isn’t a challenge (it’s impossible!).
- (c) Local technicians — Trained technicians operate equipment.
- (c) Accuracy, speed, usage, errors, and feedback — Multiple aspects.
- (b) AI runs but doesn’t affect actual outcomes — Shadow mode for testing.
- (b) Users may not understand or trust the AI — Adoption requires trust.
C. True or False – Answers
- True — Deployment is the sixth and final stage.
- False — Continuous maintenance and updates are needed.
- True — Data drift = real-world changes over time.
- False — Edge deployment works WITHOUT internet.
- True — Monitoring must be continuous.
- False — Cloud deployment sends data to remote servers.
- False — AI assists doctors, doesn’t replace them.
- True — A/B testing compares new AI with old system.
- False — User training is often essential for adoption.
- True — The cycle is iterative.
D. Definitions – Answers
1. Deployment (in AI): The sixth stage of the AI Project Cycle where the trained and evaluated model is put into a production environment for real users, integrated with existing systems, and monitored continuously.
2. Data Drift: The phenomenon where real-world data patterns change over time, causing AI models trained on old data to become less accurate. Regular retraining is needed to address drift.
3. Edge Deployment: Running AI models directly on user devices (phones, cameras, sensors) rather than on remote servers. Enables offline use and keeps data private.
4. Cloud-based Deployment: Running AI models on remote internet servers. Provides unlimited computing power and easy updates, but requires internet connection and data leaves the device.
5. A/B Testing: A deployment approach where some users interact with the new AI system while others use the old system. Results are compared to measure whether AI improves outcomes.
6. Shadow Mode: A deployment approach where AI runs alongside the real system but doesn’t affect actual outcomes. Used to test AI in real conditions without risk.
7. Monitoring (post-deployment): Continuous tracking of AI system performance after deployment, including accuracy, speed, usage patterns, errors, and user feedback to ensure the system continues working correctly.
E. Very Short Answer Questions – Answers
1. What is Deployment?
Deployment is putting the AI model into real-world use where actual users can access it. It’s important because a model is useless until it solves real problems for real people in production environments.
2. Pre-deployment checklist:
Check that: evaluation passed, edge cases tested, documentation complete, ethical review done, legal compliance verified, user acceptance received, error handling exists, and the production environment is ready.
3. Data drift explained:
Data drift occurs when real-world patterns change over time (e.g., new spam tactics). Handle by monitoring performance regularly and retraining the model periodically with fresh, recent data.
4. Cloud vs edge deployment:
Cloud: AI runs on remote servers, needs internet, unlimited power, easy updates. Edge: AI runs on user devices, works offline, limited power, keeps data private. Choose based on privacy, connectivity, and performance needs.
5. A/B testing:
A/B testing deploys AI to some users while others keep the old system. Comparing results shows whether AI actually improves outcomes. It’s a low-risk way to test real performance.
6. User adoption challenge:
Users may not trust AI decisions they don’t understand, prefer familiar systems, or fear job replacement. Solution: provide training, make AI explainable, introduce gradually, keep humans in control.
7. Post-deployment monitoring:
Monitor: accuracy (still correct?), speed (fast enough?), usage (being used?), errors (what’s failing?), feedback (user satisfaction?), fairness (treating all groups equally?).
8. Aravind deployment:
Cloud-based system connected to local imaging equipment. Technicians capture eye images, upload to cloud, AI analyzes, results return in minutes. Deployed across 71 rural vision centers with trained local staff.
9. Why iterative:
After deployment, you may discover problems: data drift, new use cases, bugs. These send you back to earlier stages — more data, better exploration, updated model, re-evaluation, redeployment.
10. Deployment steps:
Prepare production environment, integrate with existing systems (interfaces, databases), deploy the model (big bang/phased/A/B), monitor performance continuously, maintain and update regularly.
F. Long Answer Questions – Answers
1. Steps in AI deployment:
Step 1: Prepare environment — set up servers, databases, infrastructure for real-world scale. Step 2: Integrate with existing systems — connect to user interfaces, databases, other software. Step 3: Deploy model — choose method (big bang, phased, A/B, shadow). Step 4: Monitor continuously — track accuracy, speed, errors, usage, feedback. Step 5: Maintain and update — retrain with new data, fix bugs, scale as needed. Document everything throughout.
2. Deployment challenges:
Data Drift: Real-world patterns change (new spam tricks) — retrain regularly. Edge Cases: Unexpected scenarios (unusual photos) — extensive testing. Scale Issues: 10,000 users slower than 10 — load testing, scalable infrastructure. User Adoption: Users don’t trust/use AI — training, explainability. Ethical Issues: Discrimination, privacy — ethical review, compliance.
3. Platform comparison:
Cloud: Runs on internet servers. Pros: unlimited power, easy updates, no device storage. Cons: needs internet, data leaves device. On-premise: Organization’s own servers. Pros: data stays internal, no internet dependency. Cons: maintenance burden, limited scaling. Edge: Runs on devices. Pros: works offline, private data. Cons: limited power, harder updates. Hybrid combines benefits based on needs.
4. Aravind deployment:
Challenge: 71 rural Tamil Nadu vision centers, limited doctors, patients can’t travel. Solution: Cloud-based AI analyzing retinal images. Local technicians capture images, upload to cloud, AI detects diabetic retinopathy, results in minutes. Impact: Thousands screened who had no eye doctor access. Early disease detection prevents blindness. Doctors focus on treatment. Key success factors: designed for actual users, handled infrastructure limits, kept doctors in loop.
5. Why monitoring matters:
AI can degrade over time due to data drift, bugs, or changing conditions. Monitoring detects problems before users suffer. Track: accuracy (performance decline?), speed (acceptable response times?), usage (adoption working?), errors (crashes, failures?), feedback (complaints?), fairness (bias emerging?). Monitoring enables early intervention and continuous improvement.
6. Complete AI Project Cycle:
- Problem Scoping: Define what problem to solve using 4Ws Canvas.
- Data Acquisition: Collect relevant, quality data from reliable sources.
- Data Exploration: Visualize, clean, understand data, find patterns.
- Modelling: Build AI using rule-based or learning-based approaches.
- Evaluation: Test performance with metrics (accuracy, precision, recall).
- Deployment: Put into production, integrate, monitor, maintain. Cycle is iterative — issues discovered later send you back to improve.
7. Crop disease AI deployment:
Platform: Hybrid — basic detection on phone (edge), complex cases sent to cloud. Integration: Smartphone camera app, farmer database, weather APIs. Deployment: Phased rollout — start with one district, expand after validating. Challenges: Poor internet in villages (hence edge capability), diverse crop types, farmer language/literacy. Monitoring: Track detection accuracy, false alarm rate, farmer satisfaction, disease outcomes. Success metrics: Diseases caught early, crop saved, farmer adoption rate.
Activity Answers
| Question | Suggested Plan |
|---|---|
| Who are the users? | Students (primary), teachers, parents |
| What platform? | Cloud-based with mobile/web app |
| What systems to integrate? | School learning management system, curriculum database, student performance records |
| What could go wrong? | Wrong recommendations, slow response, student privacy concerns, teachers not trusting AI suggestions |
| How to monitor? | Track: materials accessed after recommendation, student performance improvement, user feedback, accuracy of weak-area detection |
This completes the AI Project Cycle!
