Detect deepfakes in seconds

Our advanced machine learning model analyzes facial images and determines whether they are authentic or have been manipulated.

Why Deepfake Detection Matters

In an era of synthetic media, discerning truth from fiction is increasingly challenging.

Misinformation

Deepfakes can spread false narratives, manipulate public opinion, and undermine trust in institutions.

Fraud Prevention

Synthetic identities and voice cloning enable sophisticated financial and social engineering scams.

Media Literacy

We empower users with tools to critically evaluate digital content and combat digital deception.

How Our Detector Works

A streamlined, multi-stage AI pipeline built for interpretability, speed, and real-world reliability.

Face Detection

The system first locates all visible faces using the Haar Cascade detection algorithm. Each detected face is cropped so the analysis focuses only on regions where manipulation typically occurs.

Techniques: Haar Cascade, MTCNN, artifact detection

Ensemble Verification

Final predictions combine results from multiple specialized models with meta-learning to achieve [RESULT] on our test datasets. Results include confidence scores and visual explanations.

Techniques: LSTM network, motion analysis, temporal features
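The ensemble step can be sketched as a weighted combination of the specialized models' scores. The scores, weights, and threshold below are illustrative placeholders, not the project's trained meta-learner; in practice the weights would be learned from held-out validation data.

```python
import numpy as np

def ensemble_predict(base_scores, weights):
    """Combine per-model fake-probabilities into one confidence score.

    base_scores: shape (n_models,), each model's estimated P(fake).
    weights: meta-weights over the models (assumed to sum to 1).
    Returns a label and the combined confidence score.
    """
    score = float(np.dot(base_scores, weights))
    label = "fake" if score >= 0.5 else "real"
    return label, score

# Hypothetical outputs from three specialized detectors.
scores = np.array([0.92, 0.85, 0.64])
weights = np.array([0.5, 0.3, 0.2])
label, confidence = ensemble_predict(scores, weights)
# 0.5*0.92 + 0.3*0.85 + 0.2*0.64 = 0.843
```

A full stacking setup would replace the fixed weights with a small meta-model trained on the base models' validation outputs, which is what lets the combination outperform any single detector.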

Meet the Team

Electrical & Computer Engineering Seniors & Faculty Advisor

Jouri Ghazi

Team Lead / Machine Learning Engineer

Led the team and developed the CNN model architecture, training pipeline, and system integration.

Ashton Bryant

Data Research Analyst

Researched data sources and ensured data quality.

Jahtega Djukpen

Data Engineer

Managed datasets and implemented the feature extraction process to prepare inputs for model training.

Zacary Louis

Web Developer

Designed and built the website used to showcase and deploy the project's model.

Faculty Advisor

Dr. Picone

Our Commitment to Ethics & Privacy

Privacy First

All uploaded media is immediately deleted.

Transparent Accuracy

Our model achieves [RESULT] on test datasets, but we clearly communicate its limitations and potential biases.

Responsible Use

This tool is for demonstration purposes only. We prohibit harassment, discrimination, or misuse.

Resources & Documentation

Explore our technical implementation and research