An AI-based API server for clothing classification and defect detection. Uses PyTorch TorchScript models to classify clothing images into 15 categories and detect defects.
- Clothing Classification: Classify clothing types through image URLs
- Defect Detection: Detect clothing defects such as tears, stains, wear, etc.
- High Performance: Fast inference with TorchScript optimized models
- Multiple Deployment Options: Docker local, AWS Elastic Beanstalk, Docker Compose + Nginx
- Scalability: AWS cloud-based auto-scaling support
- jacket
- short pants
- tailored pants
- jumper
- shirts
- coat
- dress
- casual pants
- blouse
- tshirts
- skirt
- ripped
- pollution
- tearing
- frayed
- Framework: FastAPI
- AI/ML: PyTorch, TorchScript, OpenCV
- Image Processing: PIL, NumPy
- Deployment: Docker, Nginx
- Runtime: Python 3.10
- Clone Repository
git clone <repository-url>
cd reward-closet-ai-api-server
- Setup Virtual Environment
python -m venv .venv
source .venv/bin/activate # Linux/Mac
# or
.venv\Scripts\activate # Windows
- Install Dependencies
pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu
pip install -r requirements.txt
- Run Server
uvicorn src.app:app --host 0.0.0.0 --port 8000 --reload
- Build Docker Image
docker build -t reward-closet-ai-api-server .
- Run Container
docker run -p 8000:8000 reward-closet-ai-api-server
- Run Full Stack with Docker Compose
docker-compose up -d
- AWS CLI installed and configured
- Elastic Beanstalk CLI (EB CLI) installed
- Docker Hub account (for image push)
- Build and Push Docker Image
# Build image
docker build -t abjin/reward-closet-ai-api-server:latest .
# Push to Docker Hub
docker push abjin/reward-closet-ai-api-server:latest
- Initialize Elastic Beanstalk Environment
# Initialize EB
eb init
# Application name: reward-closet-ai-api-server
# Platform: Docker
# Region: Select desired AWS region
- Create Environment and Deploy
# Create environment
eb create production
# Deploy
eb deploy
- Check Environment Status
# Check status
eb status
# Check logs
eb logs
# Open application
eb open
The Dockerrun.aws.json file included in the project defines the configuration for running Docker containers on Elastic Beanstalk:
- Image: abjin/reward-closet-ai-api-server:latest (pulled from Docker Hub)
- Port: 8000 (FastAPI application port)
- Auto Update: new image versions are applied automatically
When this file is in the project root, Elastic Beanstalk automatically recognizes and uses it for deployment.
The following environment variables can be set in the Elastic Beanstalk console:
- PORT: 8000 (default)
- PYTHONPATH: /app
- Other required environment variables
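As an illustration (not project code), the application could read these variables with the documented defaults like so — get_port and get_pythonpath are hypothetical helper names:

```python
import os
from typing import Mapping

# Hypothetical helpers: read deployment settings, falling back to the
# defaults documented above (PORT=8000, PYTHONPATH=/app).
def get_port(env: Mapping[str, str] = os.environ) -> int:
    return int(env.get("PORT", "8000"))

def get_pythonpath(env: Mapping[str, str] = os.environ) -> str:
    return env.get("PYTHONPATH", "/app")
```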
Deploy in production environment with Nginx proxy:
# Deploy full stack with Docker Compose
docker-compose up -d
# Or run automated build and deployment script
./start.sh
The start.sh script automatically performs the following tasks:
#!/bin/bash
# Build Docker image
docker build . --tag abjin/reward-closet-ai-api-server:latest
# Push image to Docker Hub (for Elastic Beanstalk deployment)
docker push abjin/reward-closet-ai-api-server:latest
When you run this script:
- Docker image is built with the latest code
- Image is uploaded to Docker Hub
- Elastic Beanstalk can automatically deploy the new image
- Local Development: http://localhost:8000
- Docker Compose: http://localhost (through Nginx proxy)
- AWS Elastic Beanstalk: http://your-app-name.region.elasticbeanstalk.com
- Production: Custom domain (if configured)
POST /models/clothes/predict
Content-Type: application/json
{
"url": "https://example.com/image.jpg"
}
Response Example:
{
"top1ClassName": "tshirts",
"top1Score": 0.95
}
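A minimal client for the endpoint above can be sketched with the standard library alone (the image URL is a placeholder, and localhost:8000 assumes the local development setup):

```python
import json
import urllib.request

# Build a POST request matching the /models/clothes/predict contract above.
def build_request(image_url: str, base_url: str = "http://localhost:8000") -> urllib.request.Request:
    body = json.dumps({"url": image_url}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/models/clothes/predict",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Extract the top-1 prediction from a response body shaped like the example above.
def top1(response_body: str) -> tuple[str, float]:
    data = json.loads(response_body)
    return data["top1ClassName"], data["top1Score"]

req = build_request("https://example.com/image.jpg")
# With the server running, urllib.request.urlopen(req) returns the JSON shown above.
```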
GET /health
After running the server, you can view auto-generated API documentation at:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
reward-closet-ai-api-server/
├── src/
│   ├── app.py                 # FastAPI main application
│   ├── api/                   # API routes
│   │   ├── models.py          # Model prediction router
│   │   └── health_check.py    # Health check router
│   ├── service/               # Business logic
│   │   └── models.py          # Model inference service
│   ├── ai/                    # AI model related
│   │   ├── session/           # Model sessions
│   │   │   └── clothes.py     # Clothing classification model
│   │   └── torchscript/       # TorchScript model files
│   ├── dto/                   # Data transfer objects
│   │   └── models.py          # API request/response models
│   └── exception_handler.py   # Exception handling
├── docker-compose.yml         # Docker Compose configuration
├── Dockerfile                 # Docker image build configuration
├── Dockerrun.aws.json         # Elastic Beanstalk Docker configuration
├── nginx.conf                 # Nginx proxy configuration
├── requirements.txt           # Python dependencies
├── start.sh                   # Deployment script
└── README.md                  # Project documentation
fastapi==0.115.6
uvicorn==0.32.1
torch==2.2.2
torchvision==0.17.2
opencv-python==4.10.0.84
pillow==11.0.0
numpy==1.26.4
requests==2.32.3
pydantic==2.10.3
- Follow Python PEP 8
- Use type hints when possible
- Data validation through Pydantic models
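For example, the request/response DTOs in src/dto/models.py might be sketched as Pydantic models matching the API examples above (field names taken from the documented JSON; the class names here are illustrative):

```python
from pydantic import BaseModel, HttpUrl

# Hypothetical DTO sketch mirroring the documented request body {"url": ...}
class PredictRequest(BaseModel):
    url: HttpUrl  # rejects non-URL strings at validation time

# Hypothetical DTO sketch mirroring the documented response
class PredictResponse(BaseModel):
    top1ClassName: str
    top1Score: float

req = PredictRequest(url="https://example.com/image.jpg")
res = PredictResponse(top1ClassName="tshirts", top1Score=0.95)
```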
- Place new TorchScript models in the src/ai/torchscript/ directory
- Update the model path and labels in src/ai/session/clothes.py
- Modify preprocessing logic if necessary
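The save/load round trip for a TorchScript model can be sketched as follows; TinyClassifier is a stand-in, not the project's real network, and the real model files live under src/ai/torchscript/:

```python
import torch

# Illustrative stand-in for the real clothing classifier.
class TinyClassifier(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Global average pool in place of real conv layers
        return x.mean(dim=(2, 3))

# Script and save the model as a .pt file (what goes in src/ai/torchscript/)
scripted = torch.jit.script(TinyClassifier())
scripted.save("/tmp/tiny_classifier.pt")

# Loading on CPU, as a session module like src/ai/session/clothes.py might do
model = torch.jit.load("/tmp/tiny_classifier.pt", map_location="cpu")
model.eval()
with torch.no_grad():
    out = model(torch.zeros(1, 3, 224, 224))
```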
- TorchScript: Optimized model inference speed
- CPU Only: Improved portability by removing the GPU dependency
- Image Preprocessing: Efficient image processing with OpenCV and PIL
- NMS: Non-Maximum Suppression removes duplicate detections
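The NMS step can be sketched in plain NumPy (illustrative, not the project's exact implementation); boxes are [x1, y1, x2, y2] corner coordinates:

```python
import numpy as np

# Greedy Non-Maximum Suppression: keep the highest-scoring box, drop any
# remaining box whose IoU with it exceeds the threshold, then repeat.
def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5) -> list[int]:
    order = scores.argsort()[::-1]  # indices by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        if rest.size == 0:
            break
        # Intersection of box i with each remaining box
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]  # suppress heavy overlaps
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)  # the second box overlaps the first and is dropped
```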
- Model File Missing
  - Check that the TorchScript model file is in the correct path
  - Verify model file permissions
- Image Load Failure
  - Check image URL accessibility
  - Verify supported image formats (JPEG, PNG, etc.)
- Memory Issues
  - Resize images if they are too large
  - Adjust batch size for batch processing
- Deployment Failure
  - Verify the image is correctly pushed to Docker Hub
  - Check the image name in the Dockerrun.aws.json file
  - Verify AWS permissions configuration
- Application Start Failure
  - Check detailed logs with eb logs
  - Verify environment variable configuration
  - Check port configuration (default: 8000)
- Performance Issues
  - Check EC2 instance type (CPU-intensive tasks)
  - Review load balancer configuration
  - Review Auto Scaling settings
- Container Build Failure
  - Check the Docker image size (it can be large due to PyTorch)
  - Check for errors during dependency installation
  - Verify network connectivity
- Container Runtime Errors
  - Check for port conflicts
  - Verify volume mount permissions
  - Check environment variable configuration
MIT License
- Fork the repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add some amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
If you encounter any issues or have questions, please reach out through the Issues tab.