Project Overview

FairHire AI is a generative AI-powered application screening tool designed to help organizations automate the initial review of resumes while ensuring compliance with employment law and bias-mitigation requirements. The system combines resume screening, bias detection, audit logging, and human-in-the-loop review to create a transparent, auditable hiring process.
Our AI agent, FairHire AI, is designed to solve multiple problems in a recruiting environment.
Operational: It helps the business screen hundreds of resumes quickly and surface minimally qualified candidates for review. The agent evaluates each applicant's resume for education, experience, and minimum requirements against the advertised position and produces a match score. A recruiter then validates each applicant, either confirming or overriding the agent's output. Applicants who meet the minimum criteria are included in an agent-generated PDF report that can be forwarded to the hiring manager for possible interview.

Compliance: The AI agent includes data testing to provide bias review and ensure compliance with NYC Local Law 144, Equal Employment Opportunity Commission guidance, and EU AI Act Article 10. Bias testing can be run within the application itself using an administrator account. The administrator can also review audit logs from the last 120 days and export them as JSON.

Key Innovation
Our FairHire AI design integrates bias auditing, PII scrubbing, and explainable scoring directly into the application workflow. All application activity and hiring decisions are logged, bias is automatically detected using integrated Fairlearn tooling, and human confirmation or override is mandatory, keeping a human in the loop.

🎯 Primary Goal

Accelerate candidate screening to reduce time to hire, while reducing algorithmic bias through structured testing and continuous monitoring.

⚖️ Compliance Focus

Our team focused primarily on NYC Local Law 144 (bias audit reporting, 120-day storage), the EU AI Act (high-risk system governance), and Title VII employment law requirements.

🔍 Unique Control

PII scrubbing before ML inference, disparate impact ratio monitoring (EEOC 4/5ths rule), and immutable audit logs for every decision.

👥 Human-in-Loop

AI recommends; humans decide. Recruiters must override or confirm every recommendation with documented reasoning.

Project Artifacts & Links

Resource | Link | Purpose
Test Application | FairHire AI Live | Functional MVP for bias testing and resume screening
Code Repository | GitHub Repo | Complete source code, deployment config, and documentation
Technical Spec | Spec Sheet | Architecture, data model, and tech stack details

Operational Challenge

Organizations with in-house recruitment teams struggle to review hundreds or thousands of resumes efficiently. Manual screening is:

  • Time-consuming: Recruiters spend 5–10 minutes per resume on initial screening
  • Inconsistent: Different reviewers apply different standards, leading to unconscious bias
  • Error-prone: Qualified candidates are missed; unqualified applicants advance

Compliance Challenge

Hiring decisions increasingly face scrutiny from regulators and applicants. Without proper documentation and bias testing:

  • NYC Local Law 144: Requires annual bias audits for any AI used in hiring; penalties up to $1,000 per violation
  • EU AI Act: Classifies hiring AI as high-risk; requires human review, bias testing, and governance documentation
  • Title VII (Equal Employment Opportunity): Prohibits discrimination; requires evidence of non-discriminatory practices
  • Disparate Impact Theory: Even well-intentioned systems can be illegal if outcomes show 4/5ths rule violations (e.g., selection rate 50% for one group vs. 65% for another)
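The 4/5ths rule arithmetic in the example above can be checked directly (a minimal sketch; the helper name is ours, not part of the system):

```python
def passes_four_fifths(rate_a: float, rate_b: float) -> bool:
    """EEOC 4/5ths rule: the lower selection rate must be >= 80% of the higher."""
    lower, higher = sorted((rate_a, rate_b))
    return lower / higher >= 0.80

# Example from the text: 50% vs. 65% selection rates.
ratio = 0.50 / 0.65
print(round(ratio, 3))                  # 0.769 (below 0.80, a potential violation)
print(passes_four_fifths(0.50, 0.65))   # False
```
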

FairHire AI Solution

FairHire AI solves both problems by combining speed with compliance:

  • Automatically scores resumes against job criteria (skills, experience, education, certifications)
  • Scrubs PII before scoring to reduce demographic bias
  • Continuously tests for adverse impact using EEOC 4/5ths rule
  • Logs every decision, override, and review for audit defense
  • Generates annual bias audit reports for regulatory compliance

Target Entity

Medium to large organizations (500–5,000 employees) with in-house recruitment teams who screen 100+ applicants per month and are subject to NYC and/or EU employment regulations.

External Regulatory Stakeholders

NYC Department of Consumer & Worker Protection (DCWP)

Enforces Local Law 144. Requires annual bias audit reports to be posted publicly. FairHire AI automatically generates and exports audit reports in the required format.

EU AI Act Regulators

Classify hiring AI as high-risk. Require technical documentation, logging, human oversight, and bias testing. FairHire AI includes full governance chain and audit trail.

EEOC (US Equal Employment Opportunity Commission)

Enforces Title VII. Requires evidence of non-discriminatory hiring practices. FairHire AI logs disparate impact metrics and provides defense documentation.

Independent Auditors

Required by NY Law 144 to conduct annual third-party bias audits. FairHire AI provides full audit trail, model documentation, and test results for independent verification.

Internal Organizational Stakeholders

Chief Human Resources Officer (CHRO)

Business owner. Sets hiring strategy, approves job descriptions, reviews shortlists, and ensures fair hiring outcomes across the organization.

HR Specialists & Recruiters

Primary users. Upload job descriptions, screen resumes, override recommendations with documented reasons, and manage candidate communication.

Legal Compliance Office

Ensures regulatory alignment. Reviews governance policies, JD templates, audit reports, and responds to regulatory inquiries or candidate challenges.

Responsible AI Lead / Data Science

Conducts bias testing, maintains model performance, documents fairness metrics, and alerts organization to drift or adverse impact.

Software Engineering Team

Maintains application, ensures PII scrubber works correctly, updates LLM prompts, manages database retention policies (120-day deletion), and fixes bugs.

IT Security

Prevents unauthorized data access, conducts security scans, ensures no API leakage, and monitors data residency compliance.

External User Stakeholders

Job Candidates

Submit resumes and receive notifications of application status. Candidates are informed that the review includes AI and have the right to human review on appeal. Consent is required before a resume is processed.

Hiring Managers

Review FairHire AI shortlists, make final hiring decisions, and document reasoning for all overrides or rejections.

Model & Infrastructure

AI Model: Claude 3 Haiku (claude-3-haiku-20240307) via Anthropic API
Rationale: Fast, low-cost structured extraction; strong instruction following for resume parsing; Constitutional AI training reduces demographic bias.

Data Inputs

  • Resumes: User-uploaded PDFs and DOCX files (mime type & size validated)
  • Job Descriptions: Stored in database with required skills, experience, education, and certifications
  • Test Datasets: Kaggle Resume Dataset (200K candidates, 24 categories) + team-generated synthetic test resumes for bias testing

Core Workflow

Resume Ingestion Pipeline:
File upload → MIME/size validation → Text extraction (pdf-parse or mammoth) → PII scrubbing (10 regex categories) → Scrub validation → Database persistence
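The PII scrubbing step can be sketched with a small subset of regex categories (the production scrubber uses 10; these patterns and names are simplified assumptions for illustration):

```python
import re

# Illustrative subset of the scrubber's regex categories; patterns
# and placeholder names here are our assumptions, not the real tool's.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace each detected PII span with a category placeholder."""
    for category, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{category.upper()}_REDACTED]", text)
    return text

resume = "Contact: jane.doe@example.com, (555) 123-4567"
print(scrub_pii(resume))
# Contact: [EMAIL_REDACTED], [PHONE_REDACTED]
```
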

Screening Workflow:
Anonymized resume text + job criteria → Claude API → Structured JSON response → Validation for prohibited content → Score computation (skills 40pt, experience 35pt, education 15pt, certifications 10pt) → Results + prompt logged to database
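The score computation at the end of this workflow might look like the following sketch, assuming the LLM extraction yields a 0–1 match fraction per component (that input shape is our assumption):

```python
# Component weights from the screening workflow (points out of 100).
WEIGHTS = {"skills": 40, "experience": 35, "education": 15, "certifications": 10}

def compute_match_score(component_fractions: dict[str, float]) -> float:
    """Combine per-component match fractions (0.0-1.0) into a 0-100 score."""
    assert sum(WEIGHTS.values()) == 100
    return sum(WEIGHTS[k] * component_fractions.get(k, 0.0) for k in WEIGHTS)

score = compute_match_score(
    {"skills": 0.75, "experience": 1.0, "education": 1.0, "certifications": 0.5}
)
print(score)  # 30.0 + 35.0 + 15.0 + 5.0 = 85.0
```
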

AI Agents

Bias Detection Agent

Generates 6 synthetic resumes with identical qualifications but varying demographic name signals. Submits all six to Claude. Computes Disparate Impact Ratio (DIR) using EEOC 4/5ths rule. Flags if DIR < 0.80 for any protected group.
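The DIR computation the agent performs could be sketched as follows (group labels and rates are illustrative; in practice they come from the six synthetic-resume submissions described above):

```python
def disparate_impact_ratios(selection_rates: dict[str, float]) -> dict[str, float]:
    """DIR per group: each group's selection rate over the highest observed rate."""
    highest = max(selection_rates.values())
    return {group: rate / highest for group, rate in selection_rates.items()}

# Hypothetical selection rates per demographic name-signal group.
rates = {"group_a": 0.60, "group_b": 0.55, "group_c": 0.42}
ratios = disparate_impact_ratios(rates)
flagged = [g for g, r in ratios.items() if r < 0.80]
print(flagged)  # ['group_c'] since 0.42 / 0.60 = 0.70, below the 4/5ths threshold
```
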

Selection Rate Monitor

Continuous monitoring after each evaluation. Alerts if demographic selection rates diverge beyond threshold. Prevents shortlist advancement until reviewed.

Audit Logging Agent

Captures every action (resume upload, score, override, appeal) with timestamp, user ID, and decision state. Immutable append-only design.

Compliance Export Agent

Generates NYC Local Law 144 compliant PDF reports and JSON exports of audit artifacts for regulatory submission.

User Roles & Access Control

  • Administrator: Bias testing, compliance PDF export, audit log access, dataset loading
  • HR Recruiter: Upload job descriptions, create jobs, screen resumes, correct/override ratings
  • Applicant: Upload resume, delete profile, submit appeals

Human Review Points

  • Consent Gate: Applicant must click "I consent" before resume is processed
  • Recruiter Override: The recruiter must decide whether to advance, hold, reject, or escalate each candidate to human review; every override is logged with a reason.
  • Applicant Appeal: Users can appeal AI decisions. Recruiter reviews and responds (SLA: 10 business days).
  • Bias Alert Review: If adverse impact detected, admin must manually review job criteria before continued use.

Outputs & Reports

  • Hiring Manager report with recommended resumes
  • Applicant list with scores and explanations
  • Bias audit report (quarterly)
  • NYC Local Law 144 compliance exports (PDF + JSON)
  • Audit trail (immutable logs of all decisions)

Design Stage Risks

⚠️ Regulatory Misalignment Risk

Misinterpreting NYC Local Law 144 or EU AI Act requirements could lead to non-compliance and penalties.

Mitigation: Legal review of all design documents. External compliance audit. Annual governance review.

Build Stage Risks

⚠️ Representative Data Risk

Kaggle test resumes may not reflect current labor market or real hiring patterns. May not catch biases relevant to actual candidate pools.

Mitigation: Supplement Kaggle data with real anonymized resumes. Generate synthetic test resumes covering demographics and role types. Validate against historical hiring data.

⚠️ Historical Bias Risk

Training data may perpetuate past discriminatory hiring patterns (e.g., gender bias in tech hiring).

Mitigation: Bias audit on test data before deployment. Fairness constraints in scoring. PII scrubbing before ML inference.

Validation Stage Risks

⚠️ Independent Testing Gaps

Internal bias testing may miss edge cases or be insufficiently rigorous to satisfy regulators.

Mitigation: External third-party bias audit. Test against EEOC 4/5ths rule. Document all test results.

Deployment Stage Risks

⚠️ Legal Liability Risk

Incorrect scoring could expose organization to discrimination lawsuits, EEOC complaints, or regulatory penalties.

Mitigation: Explainable scores with audit trail. Human override mandatory. Adverse impact monitoring. Insurance coverage.

⚠️ Data Privacy Risk

Resumes contain sensitive PII (address, phone, graduation year, etc.). Breach could violate GDPR/CCPA and harm applicants.

Mitigation: PII scrubbing before ML. Encryption at rest (S3/database). 120-day data deletion policy. Access controls.

⚠️ Prompt Injection Risk

Malicious resume content (crafted to break JSON parsing) could cause system errors or bypass guardrails.

Mitigation: Input validation. JSON schema enforcement. Error handling for malformed responses. Security scanning.

Operating Stage Risks

⚠️ Audit Failure Risk

Missing logs or incomplete audit trail could fail regulatory inspection. Data loss could create compliance gaps.

Mitigation: Immutable append-only audit logs. Daily backups. Monthly audit log review. Automated alerts for missing data.

⚠️ Model Drift Risk

Resume formats, job markets, and candidate pools change over time. Model performance may degrade or bias may increase.

Mitigation: Continuous monitoring dashboards. Monthly bias metric reviews. Quarterly retraining. Alert thresholds.

Evolution Stage Risks

⚠️ Regulatory Change Risk

New laws or regulatory guidance (e.g., state AI bills) could require system changes. Failure to adapt could trigger non-compliance.

Mitigation: Legal team monitors regulatory landscape. Annual governance framework review. Policy update process. Staff retraining on changes.

Key Risk Management Priorities

  • Bias Audit
  • Data Integrity
  • Remove Proxy Variables
  • Secure PII
  • Prompt Injection Prevention
  • Data Drift Monitoring

Governance Principles

FairHire AI operates under three core principles:

  • Transparency: Every decision is logged and explained. Users see why they were scored and how.
  • Accountability: Clear roles and escalation paths. Humans make final hiring decisions; AI provides recommendations.
  • Compliance: Built-in audit trails, bias testing, and regulatory export for NY 144 and EU AI Act.

Control Architecture

Preventive Controls

  • PII Scrubbing: Remove name, address, graduation year, photos before ML scoring. Reduces demographic signal leakage.
  • Job Description Governance: Legal review of JDs before entry. Avoid coded language that signals demographics.
  • Fairness Constraints: Scoring weighted to prioritize skills and experience over proxy signals.

Detective Controls

  • Continuous Adverse Impact Monitoring: After each screening batch, check 4/5ths rule. Alert if violation detected.
  • Immutable Audit Logs: Every action logged with timestamp and user ID. Logs cannot be modified retroactively.
  • Quarterly Bias Audits: Formal statistical analysis of selection rates across protected groups. Report to leadership.

Corrective Controls

  • Override Documentation: Any deviation from AI recommendation must be documented with reason. Patterns reviewed quarterly.
  • Escalation Process: Bias alerts block shortlist advancement until CHRO reviews and approves.
  • Incident Response: Bias allegations trigger investigation and remediation plan within 48 hours.

Operating Model Roles

CHRO / Head of Talent (Business Owner)

Owns hiring strategy and fair outcomes. Reviews shortlists, bias reports, and approves high-impact overrides. Accountable for compliance.

HR Specialists / Recruiters (Users)

Screen resumes using FairHire AI. Document overrides. Manage candidate communication. Required to complete annual bias training.

Legal / Compliance Officer

Reviews policies, job descriptions, and audit reports. Ensures regulatory alignment. Responds to candidate appeals and regulatory inquiries.

Responsible AI Lead / Data Science

Conducts bias audits, maintains fairness metrics, alerts to model drift, and documents test results for audit defense.

Decision Authority Matrix

Decision | Authority | Input Required
Job description approval (before use) | Legal + CHRO | HR, Data Science (bias review)
Advance candidate from shortlist | Recruiter | FairHire AI score, explanation
Override/reject top-ranked candidate | Recruiter + CHRO (documented) | FairHire AI score, documented reason
Approve shortlist after bias alert | CHRO | Responsible AI Lead assessment
Respond to bias allegation | Legal + CHRO | Audit logs, bias audit results
Update system policies | CHRO + Legal | Data Science feedback, regulatory guidance

Escalation Procedures

Bias Alert (4/5ths Rule Violation)

Trigger: Selection rate for any protected group < 80% of highest rate.
Action: Shortlist held. CHRO notified. Responsible AI Lead provides assessment. Shortlist cannot advance without CHRO sign-off.
SLA: 4 hours for assessment and decision.

Candidate Bias Allegation

Trigger: Candidate claims discrimination in FairHire AI screening.
Action: Legal notified immediately. Appeal logged. Investigation conducted. Candidate receives written response.
SLA: 10 business days for substantive response.

Model Performance Degradation

Trigger: Accuracy drops > 5%, or bias metrics diverge significantly.
Action: Alert to Data Science. Investigation into root cause. Potential retraining or rollback to prior version.
SLA: Assessment within 48 hours. Remediation within 2 weeks.

Primary Data Sources

Kaggle Resume Dataset

Source: Resume Screening Dataset (200K candidates) from LiveCareer.com (web-scraped)
Coverage: 24 job categories: HR, Designer, IT, Teacher, Healthcare, Finance, Engineering, etc.
Use: Initial training and bias testing. Anonymized resumes for synthetic demographic testing.

Team-Generated Test Resumes

Source: Group 4 creates synthetic resumes with controlled demographic variations
Purpose: Bias audit. Six resumes with identical skills but different name signals to test for disparate impact.
Coverage: Engineering, HR, Finance, Sales, and other key roles.

Real Anonymized Resumes (Future)

Source: Organization's historical hiring data (with PII removed)
Purpose: Post-launch validation. Verify model performance on real candidate pools.
Governance: GDPR/CCPA compliant. Requires consent or anonymization. Legal review required.

Data Quality Assurance

Resume Quality Checks

  • File format validation (PDF, DOCX only)
  • File size limits (< 10 MB)
  • Text extraction validation (ensure text is readable, not corrupted)
  • PII detection and removal (10+ regex patterns)
  • Length validation (must contain extractable text)
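The upload checks above can be sketched as follows (limits mirror the stated rules of PDF/DOCX only and under 10 MB; the function and constant names are ours):

```python
# Allowed MIME types for resume uploads (PDF and DOCX, per the quality checks).
ALLOWED_MIME = {
    "application/pdf",
    "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
}
MAX_BYTES = 10 * 1024 * 1024  # 10 MB limit

def validate_upload(mime_type: str, size_bytes: int, extracted_text: str) -> list[str]:
    """Return a list of validation errors; an empty list means the file passes."""
    errors = []
    if mime_type not in ALLOWED_MIME:
        errors.append("unsupported file type")
    if size_bytes >= MAX_BYTES:
        errors.append("file too large")
    if not extracted_text.strip():
        errors.append("no extractable text")
    return errors

print(validate_upload("application/pdf", 2_000_000, "Jane Doe, Engineer"))  # []
print(validate_upload("image/png", 20_000_000, ""))
# ['unsupported file type', 'file too large', 'no extractable text']
```
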

Job Description Quality Checks

  • Completeness: required skills, experience, education, certifications populated
  • Bias review: avoid coded language (e.g., "digital native", "team fit")
  • Legal review: ensure JD complies with employment law

Scoring Validation

  • JSON schema validation on Claude response
  • Score range validation (0–100)
  • Weight verification (40+35+15+10 = 100%)
  • Spot check: manual review of 5–10 scored resumes weekly
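A minimal sketch of these scoring checks, assuming a response shape with per-component points and a total (the field names are our assumption, not the production schema):

```python
import json

# Maximum points per component, matching the documented weights (sum to 100).
EXPECTED_MAX = {"skills": 40, "experience": 35, "education": 15, "certifications": 10}

def validate_score_payload(raw: str) -> dict:
    """Parse a scoring response and enforce the range and weight rules."""
    payload = json.loads(raw)
    components = payload["components"]
    if sum(EXPECTED_MAX.values()) != 100:
        raise ValueError("weights must sum to 100")
    for name, max_points in EXPECTED_MAX.items():
        if not 0 <= components[name] <= max_points:
            raise ValueError(f"{name} score out of range")
    if payload["total"] != sum(components.values()) or not 0 <= payload["total"] <= 100:
        raise ValueError("total does not match components")
    return payload

raw = '{"components": {"skills": 32, "experience": 30, "education": 15, "certifications": 5}, "total": 82}'
print(validate_score_payload(raw)["total"])  # 82
```
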

Data Retention & Privacy

Data Type | Retention Period | Deletion Method
Raw resume files (PDFs/DOCX) | 7 years (legal hold) | Encrypted in S3; deleted via lifecycle policy
Extracted resume data (anonymized) | 120 days post-hire/no-hire | Database deletion; Pinecone namespace deletion
Audit logs | 3 years (compliance) | Archived offline after 1 year
Bias audit reports | Indefinite (regulatory requirement) | Stored securely; PII references removed

Data Risks & Mitigations

🔴 Risk: Historical Bias in Training Data

Impact: Model perpetuates past discriminatory patterns.
Mitigation: Bias audit on Kaggle data before use. Remove correlated proxy variables. Fairness constraints in scoring. PII scrubbing.

🔴 Risk: Unrepresentative Test Data

Impact: Model may not generalize to real candidate pools.
Mitigation: Supplement Kaggle with team-generated synthetic data. Post-launch validation on real anonymized data. Continuous monitoring.

🔴 Risk: Data Breach / PII Exposure

Impact: Unauthorized access to resumes; GDPR/CCPA violations; candidate harm.
Mitigation: Encryption at rest (S3, database). TLS in transit. Access controls (RBAC). PII scrubbing before ML. 120-day deletion. Regular security audits.

🟡 Risk: Data Quality Degradation

Impact: Poor quality resumes or JDs lead to inaccurate scores.
Mitigation: Automated quality checks on upload. Manual spot-check weekly. Clear user guidelines. Data validation on extraction.

Fallback Plan

Data Loss Scenario
Scenario: Database corruption or ransomware attack.
Fallback: Restore from daily encrypted backups (max 24-hour lag). Notify CHRO, legal, and affected candidates. Pause new screening until verified clean. Audit logs restored separately from immutable offsite archive.
Model Degradation Scenario
Scenario: Bias detected in Claude API version update.
Fallback: Rollback to prior tested version. Pause new screening. Retest and retrain if needed. Notify CHRO of issue. Report to regulators if adverse impact detected in shortlists.

Core Architecture

Component | Technology | Purpose
Frontend UI | React, TypeScript, Tailwind CSS | Resume upload, job screening, bias reports, admin dashboard
Backend API | Node.js + Express (or FastAPI in Python) | Resume processing, scoring, audit logging, regulatory exports
Database | PostgreSQL | Resume data, job descriptions, scores, audit logs, user profiles
File Storage | AWS S3 (encrypted) | Original resume PDFs/DOCX files (raw, not processed)
LLM API | Claude 3 Haiku (Anthropic) | Structured resume extraction, scoring explanation generation
Deployment | Railway.app (current MVP) | Containerized app deployment with auto-scaling
Monitoring | DataDog or CloudWatch | API latency, error rates, bias metrics, audit log integrity

Why These Choices?

Claude 3 Haiku (LLM)

Why: Fast, low-cost, structured JSON output, strong instruction following. Constitutional AI training reduces demographic bias vs. base models. API usage does not contribute to model training on user data, which supports HIPAA/GDPR data-handling requirements.

PostgreSQL (Database)

Why: Mature, reliable, supports ACID transactions (critical for audit integrity). Audit logs are immutable append-only tables. Strong encryption and access control options.

AWS S3 (File Storage)

Why: Enterprise-grade encryption, bucket policies, versioning. Original resume files stored encrypted and separate from processed data. Lifecycle policies enable 7-year retention and automated deletion.

React Frontend

Why: Component reusability for resume cards, score widgets, bias dashboards. TypeScript prevents type-related bugs during rapid development.

Integration Justification

Resume Parsing

Tools: pdf-parse (PDF) + mammoth (DOCX)
Justification: Open-source and lightweight. Extracted text preserves minimal structure, which is sufficient for regex PII scrubbing and LLM extraction.

PII Scrubbing

Approach: Regex patterns (10+ categories: name, address, phone, email, DOB, graduation year, photo references)
Justification: Deterministic, interpretable, and auditable. No ML model needed. Ensures PII never reaches Claude API.

Bias Testing

Method: Disparate Impact Ratio (EEOC 4/5ths rule). Synthetic resume generation with demographic name variations.
Justification: EEOC standard. Defensible in legal/regulatory contexts. Automated and continuous.

Audit Logging

Approach: Immutable append-only PostgreSQL table. Payload hashed (not stored as PII).
Justification: Tamper-evident. Satisfies GDPR/CCPA audit requirements. Queryable for compliance investigations.
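A sketch of this hash-not-store pattern for audit rows (column names are illustrative, not the production schema):

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_entry(user_id: str, action: str, payload: dict) -> dict:
    """Build an append-only audit row: the payload is hashed, never stored.

    Canonical JSON (sorted keys) makes the hash deterministic for
    equal payloads, so rows are tamper-evident and comparable.
    """
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "action": action,
        "payload_sha256": digest,
    }

entry = audit_entry("recruiter-17", "override", {"candidate": "c-102", "reason": "manual review"})
print(entry["payload_sha256"][:12])  # stable prefix for the same payload
```
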

Deployment Rationale

Current: Railway.app
MVP deployment. Fast setup, built-in PostgreSQL, easy scaling, good for testing.

Production Option: AWS ECS
Enterprise deployment. Full VPC isolation, compliance certifications (SOC 2), CloudTrail audit logging, multi-region support.

Implementation Strategy

FairHire AI is built with governance integrated at every stage. Rather than bolting on compliance after development, we embed:

  • Bias testing in the development pipeline (pre-deployment)
  • Audit logging from day one (immutable decision trail)
  • Human review mandatory at key gates (recruiter override, bias alert approval)
  • Regulatory exports automated (NYC 144 reports, EU AI Act documentation)

Benchmark Gates

No feature is released without passing these governance benchmarks:

✓ PII Scrubber: 100% Accuracy

Regex patterns tested against 1000+ resume samples. False negative rate (missed PII) < 1%. Manual audit weekly.

✓ Bias Testing: 0.80 Disparate Impact Ratio (4/5ths Rule)

If any protected group has selection rate < 80% of highest group, release blocked. Must debug and retest.

✓ Security Scan: Zero API Leakage

Code scanned for hardcoded credentials, API key exposure, PII in logs. Required before each production release.

✓ Audit Trail: Complete Decision Logging

Every action logged: resume upload timestamp, user ID, score, override reason, timestamp. Immutable after write.

✓ Human-in-Loop: Mandatory Confirmation

Recruiters must explicitly confirm or override every recommendation. No batch auto-advance. All overrides documented.

Governance Artifacts (Built-In)

  • SR 11-7 Style Model Inventory: Catalog of all ML/AI models, versions, training data, performance metrics
  • Risk-Tiering Algorithm: Classifies hiring AI as "high-risk" with documented justification
  • Bias Testing Dashboard: Real-time view of selection rates, disparate impact alerts, audit logs
  • Automated Release Gate Workflow: No deployment until benchmarks passed + human approval

Regulatory Compliance Roadmap

NYC Local Law 144 Compliance

  • Annual bias audit (independent third-party)
  • Public posting of bias audit results
  • Algorithm transparency notice to candidates
  • Right to human review on appeal

EU AI Act Compliance (High-Risk)

  • Technical documentation (model card, data sheets)
  • Human oversight procedures (CHRO sign-off on bias alerts)
  • Adverse impact monitoring (continuous 4/5ths rule checking)
  • Audit trail and explainability (decision logs, score explanations)

Title VII / EEOC Compliance

  • Disparate impact testing (EEOC 4/5ths rule)
  • Documentation of non-discriminatory intent
  • Audit defense package (test results, policy documentation)

Validation Framework

Unit Testing (Code Level)

  • PII scrubber: test 100+ regex patterns against known PII examples
  • Score computation: test weight calculation (40+35+15+10 = 100)
  • JSON parsing: test malformed responses, missing fields, type validation
  • Database queries: test audit log writes, immutability, query performance
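Unit tests for the PII scrubber might take this shape (the `scrub_pii` shown is a toy stand-in for the real function; the sample patterns illustrate the test structure only):

```python
import re

def scrub_pii(text: str) -> str:
    """Toy stand-in scrubber: redacts emails and 4-digit years (e.g. graduation)."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    return re.sub(r"\b(19|20)\d{2}\b", "[YEAR]", text)

def test_email_is_removed():
    assert "@" not in scrub_pii("reach me at jane@example.com")

def test_graduation_year_is_removed():
    assert "2019" not in scrub_pii("B.S. Computer Science, 2019")

def test_clean_text_unchanged():
    assert scrub_pii("Python, SQL, leadership") == "Python, SQL, leadership"

test_email_is_removed(); test_graduation_year_is_removed(); test_clean_text_unchanged()
```
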

Integration Testing (End-to-End)

  • Resume upload → text extraction → PII scrubbing → Claude API → scoring → database write
  • Override workflow: recruiter decision → audit log → email notification
  • Bias alert workflow: selection rates calculated → 4/5ths rule check → escalation to CHRO

Bias Testing (Model Fairness)

  • Disparate Impact Testing: Generate 6 synthetic resumes (identical skills, different demographic names). Compute selection rate ratio for each group. Pass if all ratios ≥ 0.80.
  • Fairness Metrics: Measure demographic parity, equalized odds, and predictive parity across gender, race, age, and disability proxies.
  • Edge Case Testing: Underrepresented groups, non-traditional career paths, gaps in resume, etc.

Security Testing

  • Penetration testing (authorized security scan)
  • API key/credential scanning (code review, secret detection)
  • PII leakage test (prompt injection with PII in resume text)
  • Access control validation (RBAC testing)

Compliance Testing

  • Audit log completeness: verify every action logged with timestamp and user ID
  • Data deletion verification: confirm GDPR "right to erasure" works end-to-end
  • Regulatory export validation: NYC 144 report format, required fields

Sign-Off Criteria

Component | Test Requirement | Sign-Off Authority
PII Scrubber | False negative rate < 1% on 1000-sample validation set | Data Science Lead
Bias Audit | Disparate Impact Ratio ≥ 0.80 for all protected groups | Responsible AI Lead + Legal
Scoring Algorithm | 100% of scores fall within valid range (0–100); weights sum to 100 | ML Engineering Lead
Security Scan | Zero critical or high-severity vulnerabilities; no API keys exposed | IT Security Lead
Audit Trail | 100% of actions logged with timestamp, user ID, decision state | Compliance Officer
Human-in-Loop | All overrides documented; no batch auto-advance observed | CHRO / QA Lead

Test Data Strategy

  • Kaggle Dataset: 24 job categories, 200K+ candidates for representative testing
  • Synthetic Resumes: Team-generated with controlled demographic variations for bias testing
  • Edge Cases: Career gaps, job-switching, multiple languages, industry transitions
  • Real Anonymized Data (Post-Launch): Validate on organization's historical hiring data

Continuous Validation (Post-Launch)

  • Daily: Monitor bias metrics dashboard. Alert on drift.
  • Weekly: Manual spot-check of 5–10 scored resumes.
  • Monthly: Review override patterns. Escalate if systematic bias detected.
  • Quarterly: Formal bias audit. Report to leadership and board.
  • Annually: Independent third-party audit. Public compliance report.

Live Application

🚀 FairHire AI MVP

URL: https://fairhire-production-4782.up.railway.app/

Features: Resume upload, job screening, bias detection, audit logs, compliance export

Status: Live on Railway.app

Code & Documentation

📦 GitHub Repository

URL: https://github.com/zyrirena/FairHire.git

Contains: Full source code, deployment config, README with setup instructions

Governance & Technical Documentation

📄 Technical Specifications Sheet

Purpose: Complete tech stack, data model, API endpoints, and architecture diagrams

⚖️ Governance Framework

Purpose: Roles, responsibilities, control architecture, and escalation procedures

📊 Bias Audit Report

Purpose: Quarterly bias testing results, disparate impact analysis, compliance attestation

📋 Model Inventory Card

Purpose: SR 11-7 style documentation of AI model (Claude 3 Haiku), training data, performance metrics

🔐 Data Privacy & Security Plan

Purpose: PII handling, encryption, access controls, data deletion procedures (GDPR/CCPA compliant)

Compliance Artifacts

  • NYC Local Law 144 Compliance Report: Annual bias audit results, algorithm transparency notice
  • EU AI Act Documentation: Technical documentation, risk assessment, governance procedures
  • EEOC Adverse Impact Testing: Disparate Impact Ratio results, 4/5ths rule analysis
  • Audit Defense Package: Complete logs, test results, policy documentation for regulatory inquiries

Project Milestones

✓ MVP Development

Core application with resume screening, bias detection, audit logging. Code on GitHub. Deployed on Railway.

✓ Governance Framework

Complete documentation of roles, controls, policies, and escalation procedures.

✓ Bias Testing & Validation

Formal bias audit, disparate impact analysis, sign-off from Data Science and Legal.

📋 Compliance Certification

Independent audit of NY 144 and EU AI Act compliance. Public compliance report (future).

Task Status & Assignments

Status | Task | Assigned To | Deadline
✓ Done | Set up Repository | Irena Austin | 2026-04-20
⏳ Pending | Model Training & Bias Testing | Irena Austin | 2026-04-20
⏳ Pending | Regulatory Compliance Review | Jacob Cossaboon | 2026-04-20
✓ Done | Documentation & Project Report | Bryden Dahl | 2026-04-20
✓ Done | MVP Application Deployment | Irena Austin | 2026-04-20
⏳ Pending | Governance Framework Documentation | Jacob Cossaboon | 2026-04-20
⏳ Pending | Final Technical Specification | Bryden Dahl | 2026-04-20
✓ Done | Test & Validation | Irena Austin | 2026-04-20

Group Members

Irena Austin

Role: Lead Engineer & Data Science
Responsibilities: Repository setup, model training, bias testing, MVP deployment, validation
Expertise: Python, ML, data pipeline design

Jacob Cossaboon

Role: Compliance & Governance Lead
Responsibilities: Regulatory compliance review, governance framework, legal analysis
Expertise: Employment law, AI governance, risk assessment

Bryden Dahl

Role: Documentation & Architecture
Responsibilities: Technical specifications, project report, architecture diagrams, design documentation
Expertise: Systems design, technical writing, architecture

Project Contact

Team Name: Risk Ready (Group 4)
Project: FairHire AI
Status: MVP complete, governance framework in progress
Target Completion: April 2026