TrackML
TrackML is a full-stack application for tracking, managing, and analyzing machine learning models. It organizes model metadata, generates AI-powered insights, and integrates with external APIs for automation and enrichment.
Key Features
- Track details of ML models such as name, type, developer, and parameters
- Organize models using tags and status labels
- Search and filter models by different attributes
- Visual dashboard showing model stats and insights
- Dark mode for a more comfortable viewing experience
- Auto-fill model info from the HuggingFace API (see the sketch after this list)
- Compare models and generate insights with Google Gemini AI (also sketched below)
- Mobile-responsive design
- Secure login with JWT-based authentication
- Cross-origin request support
- Ready to deploy on AWS EC2
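The auto-fill and AI-insight features above rely on external APIs. The sketch below shows, in hedged form, what those calls could look like in Python: the public HuggingFace Hub endpoint and the google-generativeai package are assumed, and the field mapping and model IDs are illustrative rather than TrackML's actual service code.

```python
# Illustrative sketch only -- not the project's actual service code.
# Assumes the public HuggingFace Hub REST API and the google-generativeai package.
import os
import requests
import google.generativeai as genai

def fetch_hf_model_info(model_id: str) -> dict:
    """Fetch public metadata for a model from the HuggingFace Hub API."""
    resp = requests.get(f"https://huggingface.co/api/models/{model_id}", timeout=10)
    resp.raise_for_status()
    data = resp.json()
    # Keep only the fields an autofill feature would plausibly care about (assumed).
    return {
        "name": data.get("id"),
        "type": data.get("pipeline_tag"),
        "tags": data.get("tags", []),
        "downloads": data.get("downloads"),
    }

def compare_models_with_gemini(model_a: dict, model_b: dict) -> str:
    """Ask Gemini for a short comparison of two tracked models."""
    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")
    prompt = (
        "Compare these two ML models and summarize their trade-offs:\n"
        f"Model A: {model_a}\nModel B: {model_b}"
    )
    return model.generate_content(prompt).text

if __name__ == "__main__":
    bert = fetch_hf_model_info("bert-base-uncased")
    distilbert = fetch_hf_model_info("distilbert-base-uncased")
    print(compare_models_with_gemini(bert, distilbert))
```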
Architecture Overview
Frontend
- Built using React 18+ and Vite
- TypeScript for type safety
- TailwindCSS for styling
- Routing with React Router v6
- Global state handled via Context API
- Axios for API communication
Backend
- Flask powers the REST API (a minimal backend sketch follows this list)
- SQLAlchemy ORM for database operations (SQLite/PostgreSQL supported)
- JWT-based authentication
- Integrates Google Gemini and HuggingFace APIs
- Flask-CORS enables secure cross-origin communication
- Written in Python 3.8+
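A minimal sketch of how the pieces above fit together, assuming flask-sqlalchemy and flask-jwt-extended; the table fields, route paths, and configuration values shown are illustrative, not the project's actual code.

```python
# Minimal sketch of the backend stack described above; field and route names are assumed.
from flask import Flask, jsonify, request
from flask_cors import CORS
from flask_sqlalchemy import SQLAlchemy
from flask_jwt_extended import JWTManager, create_access_token, jwt_required

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///trackml.db"  # or a PostgreSQL URL
app.config["JWT_SECRET_KEY"] = "change-me"                      # read from env in practice
CORS(app)                                                        # allow the frontend origin
db = SQLAlchemy(app)
jwt = JWTManager(app)

class MLModel(db.Model):
    """One tracked ML model (illustrative fields)."""
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(120), nullable=False)
    model_type = db.Column(db.String(80))
    developer = db.Column(db.String(120))
    status = db.Column(db.String(40), default="active")

@app.post("/api/auth/login")
def login():
    # Real code would verify credentials against the users table.
    username = request.json.get("username", "")
    return jsonify(access_token=create_access_token(identity=username))

@app.get("/api/models")
@jwt_required()
def list_models():
    return jsonify([{"id": m.id, "name": m.name, "type": m.model_type}
                    for m in MLModel.query.all()])

if __name__ == "__main__":
    with app.app_context():
        db.create_all()
    app.run(port=5000)
```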
Infrastructure
- Hosted on AWS EC2
- Optional use of AWS RDS for databases and S3 for static assets
- Domain and SSL managed through AWS Route 53 and ACM
Nginx Configuration Summary
Reverse Proxy
- Routes requests to the frontend and the backend API
- Supports HTTPS termination
- Handles headers like X-Real-IP and X-Forwarded-For
- Configurable for WebSocket and subdomain routing
Caching
- Uses disk and memory caching
- Supports API and static file caching
- Allows custom cache control rules
- Can bypass or purge cache as needed
Performance
- Enables compression via GZIP or Brotli
- Optimized for large headers and payloads
- Manages keep-alive connections and worker/event settings
Security
- Implements rate limiting with per-IP request zones
- Adds HTTP security headers to help mitigate XSS, CSRF, and clickjacking
- Configures SSL with strong ciphers and modern protocols
Monitoring
- Logs requests, errors, and performance metrics
- Tracks response times, error rates, and cache efficiency (see the log-analysis sketch after this list)
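As a hedged illustration of that monitoring, the script below summarizes response times, 5xx error rate, and cache hit ratio from an access log. It assumes a custom log_format whose lines end with the status code, $request_time, and $upstream_cache_status; adjust the parsing to the real format.

```python
# Sketch: summarize response times, error rate, and cache efficiency from an
# nginx access log. Assumes each line ends with: <status> <request_time> <cache_status>
# (a custom log_format); adjust the parsing to match the deployed format.
import sys
from statistics import mean

def summarize(log_path: str) -> None:
    times, statuses, cache_hits, cache_total = [], [], 0, 0
    with open(log_path) as fh:
        for line in fh:
            parts = line.split()
            if len(parts) < 3:
                continue
            status, request_time, cache_status = parts[-3], parts[-2], parts[-1]
            try:
                times.append(float(request_time))
                statuses.append(int(status))
            except ValueError:
                continue
            if cache_status in ("HIT", "MISS", "BYPASS", "EXPIRED"):
                cache_total += 1
                cache_hits += cache_status == "HIT"
    errors = sum(s >= 500 for s in statuses)
    print(f"requests:        {len(statuses)}")
    print(f"avg time (s):    {mean(times):.3f}" if times else "avg time: n/a")
    print(f"5xx error rate:  {errors / max(len(statuses), 1):.2%}")
    print(f"cache hit ratio: {cache_hits / max(cache_total, 1):.2%}")

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "/var/log/nginx/access.log")
```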
Deployment
- Local setup requires Python 3.8+, Node.js 16+, and API credentials
- EC2 deployment involves setting up systemd services, Gunicorn, and Nginx (a Gunicorn config sketch follows this list)
- The frontend is built and served by Nginx, with the backend proxied on port 5000
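A sketch of what the Gunicorn side of that setup could look like as a gunicorn.conf.py; the worker count, log paths, and timeout are assumptions rather than the project's committed configuration.

```python
# gunicorn.conf.py -- illustrative settings for serving the Flask app behind Nginx.
# Values here are assumptions; tune workers and paths for the actual EC2 instance.
import multiprocessing

bind = "127.0.0.1:5000"                         # Nginx proxies to this port
workers = multiprocessing.cpu_count() * 2 + 1   # common starting heuristic
timeout = 60                                    # allow slower Gemini/HuggingFace calls
accesslog = "/var/log/trackml/gunicorn-access.log"
errorlog = "/var/log/trackml/gunicorn-error.log"
loglevel = "info"
```

With a file like this, a systemd unit would typically start the service with `gunicorn -c gunicorn.conf.py app:app` (the `app:app` module path is also an assumption).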
API Summary
- Auth endpoints for registration, login, and logout
- Model endpoints for create, read, update, delete, search, and autofill (a client-side usage example follows)
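The exact routes are not listed in this summary, so the client sketch below uses hypothetical paths (/api/auth/register, /api/auth/login, /api/models, /api/models/autofill) purely to illustrate the flow: authenticate, attach the JWT, then create and autofill a model.

```python
# Hypothetical client flow against the TrackML API; the route paths and payload
# fields below are illustrative guesses, not the documented contract.
import requests

BASE = "http://localhost:5000/api"

# 1. Register and log in to obtain a JWT.
requests.post(f"{BASE}/auth/register", json={"username": "alice", "password": "s3cret"})
token = requests.post(f"{BASE}/auth/login",
                      json={"username": "alice", "password": "s3cret"}).json()["access_token"]
headers = {"Authorization": f"Bearer {token}"}

# 2. Create a model record.
model = requests.post(f"{BASE}/models", headers=headers,
                      json={"name": "bert-base-uncased", "type": "fill-mask"}).json()

# 3. Ask the backend to autofill metadata from HuggingFace.
filled = requests.post(f"{BASE}/models/autofill", headers=headers,
                       json={"huggingface_id": "bert-base-uncased"}).json()
print(model, filled)
```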
Project Structure
- Backend: organized into models, routes, services, and config
- Frontend: includes components, pages, services, and types
- Deployment: contains Nginx and systemd setup files
Troubleshooting
- Ensure CORS is configured properly in the backend (see the Flask-CORS sketch after this list)
- Verify environment variables, including API keys and database URLs
- Make sure external services (e.g., Gemini, HuggingFace) are reachable from the server
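For the CORS item above, a common fix is to restrict Flask-CORS to the deployed frontend origin instead of allowing every origin; the FRONTEND_ORIGIN variable name below is an assumption (the localhost fallback matches Vite's default dev port).

```python
# Sketch: limit cross-origin access to the known frontend origin(s).
# FRONTEND_ORIGIN is an assumed environment variable, e.g. "https://trackml.example.com".
import os
from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
CORS(
    app,
    resources={r"/api/*": {"origins": os.environ.get("FRONTEND_ORIGIN", "http://localhost:5173")}},
    supports_credentials=True,
)
```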
Licensing and Credits
- Licensed under MIT
- Uses APIs from Google Gemini and HuggingFace
- Hosted using AWS infrastructure