# ai-classifier-sample

A clean, provider-agnostic message classification service using AI models. Built with modern Python practices, featuring robust configuration management and optimized performance through intelligent caching.
## Features

- Dual Classification Modes: Both single-turn and multi-turn conversational classification
- Context-Aware: Tracks conversation state and intent transitions across multiple turns
- Rich Output: Detailed responses including intent, confidence, reasoning, and transitions
- Provider-Agnostic: Currently supports AWS Bedrock Claude, with an extensible architecture
- Smart Configuration: Pydantic Settings with environment variable support
- High Performance: LRU-cached singleton settings for optimal performance
- Secure Authentication: Cloud provider profile support for secure access
- Modern Python: Built with Python 3.13+ and Poetry dependency management
## Installation

```bash
# Clone the repository
git clone https://github.com/theakashrai/ai-classifier-sample.git
cd ai-classifier-sample

# Install with Poetry
poetry install

# Or install with pip
pip install ai-classifier-sample
```
## Quick Start

```python
from ai_classifier_sample.service.classifier import MessageClassifier

# Initialize classifier (uses cached settings)
classifier = MessageClassifier()

# Classify a message
message = "Hello, how are you?"
category = classifier.classify(message)
print(f"Message: {message}\nCategory: {category}")
```
## Configuration

The application uses environment variables for configuration. You can set these in your environment or create a `.env` file:
| Variable | Description | Default |
|---|---|---|
| `CLOUD_REGION` | Cloud provider region for the AI service | `us-east-1` |
| `CLOUD_PROFILE` | Cloud provider profile to use for authentication | `None` |
| `MODEL_ARN` | AI model ARN or identifier | `arn:aws:bedrock:us-east-1:123456789:inference-profile/us.anthropic.claude-sonnet-4-20250514-v1:0` |
| `MAX_TOKENS` | Maximum tokens for AI model responses | `5000` |
| `PROVIDER` | AI service provider type | `aws` |
Example `.env` file:

```bash
# Cloud Provider Configuration
CLOUD_REGION=us-west-1
CLOUD_PROFILE=my-cloud-profile

# AI Model Configuration
MODEL_ARN=arn:aws:bedrock:us-east-1:123456789:inference-profile/us.anthropic.claude-sonnet-4-20250514-v1:0
MAX_TOKENS=5000
PROVIDER=aws
```
## Usage

### Single-Turn Classification

```python
import json

from ai_classifier_sample.service.classifier import MessageClassifier

# Initialize classifier (uses cached settings)
classifier = MessageClassifier()

# Single-turn classification
message = "Hello, how are you?"
result = classifier.classify(message)

# Parse the JSON response
response_data = json.loads(result)
print(f"Message: {response_data['message']}")
print(f"Category: {response_data['category']}")
```
### Multi-Turn Conversational Classification

```python
from ai_classifier_sample.service.classifier import MessageClassifier, ConversationState

# Initialize classifier and conversation state
classifier = MessageClassifier()
conversation_state = ConversationState()

# Multi-turn conversation
messages = [
    "Hi, I need help with my order",
    "I placed it last week but haven't received tracking info",
    "The order number is #12345",
]

for message in messages:
    response = classifier.classify_conversational(message, conversation_state)
    print(f"Intent: {response.intent}")
    print(f"Transition: {response.intent_transition}")
    print(f"Confidence: {response.confidence}")
```
### Accessing Settings

```python
from ai_classifier_sample.config.settings import get_settings

# Get the cached settings instance
settings = get_settings()
print(f"Cloud Region: {settings.cloud_region}")
print(f"Model ARN: {settings.model_arn}")
print(f"Max Tokens: {settings.max_tokens}")
```
## Testing

Run the tests using pytest:

```bash
# Run all tests
poetry run pytest

# Run tests with verbose output
poetry run pytest -v

# Run a specific test file
poetry run pytest tests/test_settings.py

# Run with coverage
poetry run pytest --cov=ai_classifier_sample --cov-report=html

# Run the legacy test script directly
poetry run python tests/test_settings.py
```
## Architecture

- Settings: Pydantic Settings with an LRU cache for the singleton pattern
- Classifier: Message classification service using AI models
- Environment Variables: Full support for configuration via environment variables
- Cloud Provider Profile: Automatically applied to the environment when specified
- Modern Dependencies: Poetry for dependency management and virtual environments
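The cached-singleton settings pattern can be sketched with the standard library alone. This is an illustrative sketch, not the project's actual implementation (which uses pydantic-settings, and whose field names live in `src/ai_classifier_sample/config/settings.py`):

```python
import os
from dataclasses import dataclass
from functools import lru_cache


@dataclass(frozen=True)
class Settings:
    """Illustrative subset of the real settings fields."""
    cloud_region: str
    max_tokens: int


@lru_cache(maxsize=1)
def get_settings() -> Settings:
    # The environment is read once; every later call returns the cached instance.
    return Settings(
        cloud_region=os.environ.get("CLOUD_REGION", "us-east-1"),
        max_tokens=int(os.environ.get("MAX_TOKENS", "5000")),
    )
```

Because `get_settings()` is cached, all callers share a single settings object instead of re-reading and re-validating the environment on every access.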
## Development

```bash
# Clone the repository
git clone https://github.com/theakashrai/ai-classifier-sample.git
cd ai-classifier-sample

# Install development dependencies
poetry install --with dev

# Install pre-commit hooks
poetry run pre-commit install
```
```bash
# Format code
poetry run black .
poetry run isort .

# Remove unused imports
poetry run autoflake --remove-all-unused-imports --recursive --in-place .

# Run linting
poetry run flake8 .

# Type checking (if mypy is added)
# poetry run mypy src/
```
```bash
# Run all tests
poetry run pytest

# Run tests with coverage
poetry run pytest --cov=ai_classifier_sample --cov-report=html

# Run specific test categories
poetry run pytest -m unit
poetry run pytest -m integration
```
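Running `pytest -m unit` or `pytest -m integration` assumes those markers are registered with pytest (unregistered markers trigger warnings by default). A typical registration in `pyproject.toml` might look like the following; the exact section depends on how this project configures pytest:

```toml
[tool.pytest.ini_options]
markers = [
    "unit: fast, isolated unit tests",
    "integration: tests that call external services",
]
```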
## Supported Models

Currently supported Claude models (via AWS Bedrock):

- Claude 4: Next-generation model with advanced capabilities
- Claude 3.5 Sonnet: Highly capable model with enhanced reasoning
- Claude 3.5 Haiku: Fast, cost-effective option with improved performance
- Claude 3 Sonnet: Balanced performance and speed
- Claude 3 Haiku: Ultra-fast and lightweight for simple tasks
- Claude 3 Opus: Maximum performance for complex reasoning tasks
## Adding a New Provider

The architecture is designed to be extensible. To add a new provider:

1. Create a new provider class in `src/ai_classifier_sample/providers/`
2. Implement the required interface methods
3. Update the configuration settings
4. Add provider-specific tests
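As a sketch, a minimal provider interface might look like the following. The base class and method names here are assumptions for illustration; check the existing AWS provider in `src/ai_classifier_sample/providers/` for the real contract:

```python
from abc import ABC, abstractmethod


class BaseProvider(ABC):
    """Hypothetical provider interface; the project's actual base class may differ."""

    @abstractmethod
    def classify(self, message: str) -> str:
        """Return the model's raw classification response for a message."""


class EchoProvider(BaseProvider):
    """Toy provider for illustration: answers without calling any AI service."""

    def classify(self, message: str) -> str:
        # A real provider would invoke its model here and return the response body.
        return '{"message": "%s", "category": "unknown"}' % message
```

A new provider then only needs to implement the interface; the classifier can stay unaware of which backend is in use.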
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Make your changes and add tests
4. Run the test suite (`poetry run pytest`)
5. Run code quality checks (`poetry run black . && poetry run flake8 .`)
6. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
7. Push to the branch (`git push origin feature/AmazingFeature`)
8. Open a Pull Request
- Follow PEP 8 style guidelines
- Use type hints for all function signatures
- Write comprehensive tests for new features
- Document public APIs with docstrings
- Keep commits atomic and well-described
## License

This project is licensed under the MIT License; see the `LICENSE` file for details.
## Dependencies

Runtime:

- boto3: AWS SDK for Python
- langchain-aws: LangChain AWS integration
- langgraph: Graph-based AI workflows
- langchain: Framework for developing LLM applications
- pydantic-settings: Settings management with validation

Development:

- black: Code formatting
- flake8: Linting and style checking
- pytest: Testing framework
- pre-commit: Git hooks for code quality
- isort: Import sorting
- autoflake: Unused import removal
## Design Highlights

- Provider-agnostic design for future extensibility
- Separation of concerns with clear module boundaries
- Configuration management with validation
- Performance optimization through intelligent caching
- Modern Python 3.13+ with type hints
- Poetry for reliable dependency management
- Comprehensive testing with pytest
- Integrated code quality tools
- Clear documentation and examples
- Environment-based configuration
- Secure cloud provider authentication
- Error handling and logging
- Extensible architecture for scaling
Built with ❤️ using modern Python practices