Our machine learning models analyze facial images and determine whether they are authentic or AI-generated.
In an era of synthetic media, discerning truth from fiction is increasingly challenging
Deepfakes can spread false narratives, manipulate public opinion, and undermine trust in institutions. Research shows exposure to deepfakes significantly increases distrust in government and erodes confidence in democratic systems.
Synthetic identities and face-swapped images enable sophisticated financial and social engineering scams. Women make up 96% of deepfake pornography victims, and 67% of image-based abuse victims experience negative mental health effects.
We empower users with tools to critically evaluate digital content and combat digital deception. Our detector helps protect vulnerable groups disproportionately affected by deepfakes and promotes responsible media consumption.
A streamlined, multi-stage AI pipeline built for interpretability, speed, and real-world reliability.
Complete pipeline from image upload to classification results
Front-end and back-end components working together for deepfake detection
The system offers two face detection methods: Haar Cascade and MTCNN. Each detected face is cropped and analyzed individually, focusing on regions where manipulation typically occurs. The Haar Cascade provides fast CPU-based detection, while MTCNN offers higher accuracy for complex images.
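For readers who want to see how the two detection paths fit together, here is a minimal Python sketch using OpenCV's pretrained Haar Cascade and the open-source `mtcnn` package; the function names and parameter choices are illustrative, not the project's exact code.

```python
import cv2
from mtcnn import MTCNN

def detect_faces_haar(image_bgr):
    """Fast CPU-based detection with OpenCV's pretrained Haar Cascade."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Returns one (x, y, w, h) box per detected face.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def detect_faces_mtcnn(image_bgr):
    """Slower but more robust detection with MTCNN (expects RGB input)."""
    detector = MTCNN()
    rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
    return [face["box"] for face in detector.detect_faces(rgb)]

def crop_faces(image_bgr, boxes):
    """Crop each detected face so it can be classified individually."""
    return [image_bgr[y:y + h, x:x + w] for (x, y, w, h) in boxes]
```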
Three classification models are available: Random Forest with DCT features, our custom DeepTRUTH CNN, and EfficientNet B0. Our best model (EfficientNet B0) achieves 98.90% accuracy on evaluation data, with 98.10% precision and 98.36% recall. Results include confidence scores and visual explanations.
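As an illustration of how a label and confidence score might be derived from a model's raw output, the sketch below assumes a Keras model with a single sigmoid output (1 = fake) trained on inputs scaled to [0, 1]; the helper name and preprocessing are assumptions.

```python
import tensorflow as tf

def classify_face(model, face_bgr, input_size=(224, 224)):
    # BGR -> RGB, scale to [0, 1], resize to the model's input size
    # (assumes the model was trained on [0, 1]-scaled RGB inputs).
    face = tf.image.resize(face_bgr[..., ::-1] / 255.0, input_size)
    prob_fake = float(model.predict(face[tf.newaxis, ...], verbose=0)[0, 0])
    label = "fake" if prob_fake >= 0.5 else "real"
    # Report confidence in whichever class was predicted.
    confidence = prob_fake if label == "fake" else 1.0 - prob_fake
    return label, confidence
```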
Deep dive into the technical implementation of our three classification approaches
Ensemble learning method that classifies DCT frequency-domain features with multiple decision trees. A fast baseline achieving 55% accuracy at low computational cost.
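A minimal sketch of this baseline using SciPy and scikit-learn; treating the low-frequency block of the 2-D DCT as the feature vector, along with the forest hyperparameters shown, is an assumption rather than the exact recipe.

```python
import cv2
import numpy as np
from scipy.fft import dctn
from sklearn.ensemble import RandomForestClassifier

def dct_features(face_bgr, size=64, keep=16):
    """2-D DCT of the resized grayscale crop; low-frequency block as features."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (size, size)).astype(np.float32)
    coeffs = dctn(gray, norm="ortho")
    return coeffs[:keep, :keep].ravel()  # keep*keep low-frequency coefficients

# X: stacked feature vectors, y: 0 = real, 1 = fake (placeholder training data)
# clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
# prob_fake = clf.predict_proba(dct_features(face).reshape(1, -1))[0, 1]
```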
Our custom DeepTRUTH convolutional neural network with multiple layers for automatic feature extraction. Achieves 94.44% accuracy on evaluation data.
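The DeepTRUTH architecture itself is not reproduced here, so the sketch below is a generic small binary-classification CNN in Keras; the layer counts and sizes are illustrative only.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(128, 128, 3)):
    """Small convolutional stack for binary real-vs-fake classification."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # P(fake)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```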
State-of-the-art architecture using transfer learning from ImageNet. Our best-performing model, achieving 98.90% accuracy while remaining computationally efficient.
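A sketch of the transfer-learning setup with Keras Applications' EfficientNetB0; the classification head, freezing strategy, and optimizer settings are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_efficientnet_b0(input_shape=(224, 224, 3)):
    """EfficientNet-B0 backbone with ImageNet weights and a binary head."""
    base = tf.keras.applications.EfficientNetB0(
        include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = False  # freeze ImageNet features for initial training
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),  # P(fake)
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model
```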
Comprehensive evaluation across training, development, and validation datasets
Performance metrics showing detection and classification speed across different file sizes
Statistical validation confirming meaningful performance improvements between models
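The specific test is not named above; one common choice for comparing paired classifiers on the same evaluation set is McNemar's test, sketched here with statsmodels. The `compare_models` helper and its inputs are hypothetical.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def compare_models(y_true, pred_a, pred_b):
    """McNemar's test on two models' predictions over the same samples."""
    a_right = pred_a == y_true
    b_right = pred_b == y_true
    # 2x2 table counting where the models agree/disagree in correctness.
    table = [[np.sum(a_right & b_right), np.sum(a_right & ~b_right)],
             [np.sum(~a_right & b_right), np.sum(~a_right & ~b_right)]]
    result = mcnemar(table, exact=True)
    return result.statistic, result.pvalue  # small p-value => real difference
```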
Electrical & Computer Engineering Seniors & Faculty Advisor
Team Lead / Machine Learning Engineer
Led the team, developed the CNN model architecture and training pipeline, and handled system integration.
Data Research Analyst
Researched data sources and ensured data quality.
Data Engineer
Managed datasets and implemented the feature extraction process to prepare inputs for model training.
Web Developer
Designed and built the website for showcasing and deploying the project model.
Dr. Joseph Picone
Professor, Electrical & Computer Engineering
All uploaded media is immediately deleted.
Our model achieves 98.90% accuracy on test datasets, but we clearly communicate limitations and potential biases. Performance may vary on images outside our training distribution.
This tool is for demonstration purposes only. Use for harassment, discrimination, or other abusive purposes is prohibited.
Explore our technical implementation and research