Group Members

Irena Austin, Jacob Cossaboon, Bryden Dahl

Project Files

| Resource Name | Link to Documentation |
| :--- | :--- |
| FairHire AI Test Application | Visit Posit |
| FairHire Repository | View Repository |
| any other link | Get PDF |

Problem Context

Target Entity and Stakeholders

System Scope

Risk Framing

Operating Model and Governance Design

Control Architecture and Policy-to-Control Translation

Implementation Artifacts

| Resource Name | Link to Documentation |
| :--- | :--- |
| Specifications Sheet | View |
| New Front Page | View |
| Add more | View |

Validation and Assurance Design

Tool Integration and Justification

Data Plan, Risks, and Fallback

Task allocation

| Status | Task | Assigned To | Due Date |
| :--- | :--- | :--- | :--- |
| [x] | Set up Repository | Irena | 2026-04-20 |
| [ ] | Model Training | Irena | 2026-04-20 |
| [ ] | Regulatory Compliance | Jacob | 2026-04-20 |
| [x] | Documentation | Bryden | 2026-04-20 |
| [x] | test | Irena | 2026-04-20 |
| [ ] | test | Irena | 2026-04-20 |
| [ ] | test | Jacob | 2026-04-20 |
| [x] | test | Bryden | 2026-04-20 |


1. Title: FairHire AI Project
2. Problem Context: Our AI agent addresses two problems in a recruiting environment.

Operational: it helps the business screen hundreds of resumes quickly to surface minimally qualified candidates for review; it evaluates education, experience, and the minimum requirements of the position.
Compliance: the AI agent includes data testing to provide bias review and ensure compliance with NYC Local Law 144 and the EU AI Act.


3. Target entity and use case: The target entity for our application is a business or organization that has an in-house recruitment office and screens applicants.
Stakeholders:
The application involves multiple stakeholders due to the legal requirements of the review.

Regulatory and legal stakeholders, who verify that the organization follows the regulations and penalize violations. Since the organization is located in New York, we included:
- NYC Department of Consumer and Worker Protection (DCWP), which enforces Local Law 144. To meet that requirement the AI agent will post the annual bias audit results.
- EU AI Act regulators: since using AI agents in recruitment is classified as "High-Risk," these regulators require technical documentation, logging, and human oversight.
- Independent auditors: to comply with Local Law 144, the organization must use independent auditors to complete annual testing requirements.

Internal stakeholders in the organization; for our genAI application these are:
- HR Specialists, to ensure human-in-the-loop review and that Equal Employment Opportunity rights aren't violated.
- Legal Office, to ensure all regulatory requirements are adhered to; they will also ensure that any regulatory changes are communicated to the design team and HR Specialists.


Development Team in the organization, to ensure the genAI application runs correctly and to fix bugs as they arise:
- Software Engineers, to ensure the PII scrub works correctly; they will also be responsible for updating the Claude/Gemini API prompts and verifying the database is scrubbed per the 120-day retention policy.
- IT Security, to ensure no database leakage.

Application users:
- Job candidates: they provide their resumes, with PII, to the HR Department and receive a notification of their right to an alternative (human) review, plus notice that the review includes genAI.
- Hiring managers: they receive resumes whose initial review includes a genAI screen and an explanation of why each person is a match.


4. System Scope: We designed an MVP application that we were able to deploy to Railway and set up a user interface to test it.
System boundary:
Model: will run on Claude version claude-3-haiku-20240307
Data: the system will include user-uploaded resumes in .docx or .pdf formats;
job descriptions that include each position's required skills, experience, education, and certifications, stored in the jobs table;
initial test data from Kaggle;
system-generated test resumes for bias testing
Workflow: file upload with mime/size validation → text extraction via pdf-parse or mammoth → PII scrubbing across 10 regex pattern categories → scrub validation → database persistence. Screening is then triggered separately: the anonymized text and job criteria are sent to Gemini, the structured JSON response is validated for prohibited content, scores are computed across four weighted dimensions (skills 40 pts, experience 35 pts, education 15 pts, certifications 10 pts), and results are stored with the full prompt logged.
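As a sketch of the PII scrubbing and scrub-validation steps in the workflow above, the following shows the general pattern. The categories here are a small illustrative subset invented for this example, not the application's actual 10 regex categories:

```python
import re

# Hypothetical subset of the PII pattern categories; the real
# application's category list and patterns are assumptions here.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_pii(text: str):
    """Replace each PII match with a category token and count removals."""
    counts = {}
    for category, pattern in PII_PATTERNS.items():
        text, n = pattern.subn(f"[{category.upper()}_REDACTED]", text)
        counts[category] = n
    return text, counts

def validate_scrub(text: str) -> bool:
    """Scrub-validation step: no pattern may still match the output."""
    return not any(p.search(text) for p in PII_PATTERNS.values())
```

Running the validator after scrubbing, as the workflow describes, means a resume only reaches the database once every pattern reports zero remaining matches.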
AI Agents:
Bias detection agent - generates six synthetic resumes with identical qualifications but varying demographic name signals, submits all six to Gemini, computes the Disparate Impact Ratio using the EEOC four-fifths rule, and stores results with an alert flag if DIR falls below 0.80.
Selection rate monitor - continuously monitors selection rates after each evaluation and flags anomalies
Audit logging agent - to create audit_logs for compliance
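The bias detection agent's Disparate Impact Ratio check can be sketched as follows. The group labels and counts are hypothetical, but the 0.80 threshold matches the EEOC four-fifths rule described above:

```python
def selection_rates(outcomes):
    """outcomes maps group -> (advanced, total); returns group -> rate."""
    return {g: adv / total for g, (adv, total) in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """EEOC four-fifths rule: lowest group selection rate over highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical demographic name-signal groups from the six synthetic
# resumes; an alert flag is stored when DIR falls below 0.80.
results = {"group_a": (4, 6), "group_b": (3, 6)}
dir_value = disparate_impact_ratio(results)
alert = dir_value < 0.80
```

Here group_a advances 4 of 6 (rate 0.667) and group_b 3 of 6 (rate 0.5), giving a DIR of 0.75, which would trip the alert.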
Users: the application assigns roles: administrators for bias testing, compliance PDF export, audit log access, and dataset loading; HR recruiters to upload job descriptions, create new jobs, screen resumes, and correct or override ratings; applicants to upload resumes, delete their profiles, and submit questions.
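A minimal sketch of the role assignments just described; the permission identifiers are assumptions invented for this example, not the application's actual names:

```python
# Hypothetical role-to-permission mapping mirroring the roles above.
ROLE_PERMISSIONS = {
    "admin": {"run_bias_test", "export_compliance_pdf",
              "view_audit_logs", "load_dataset"},
    "recruiter": {"upload_job_description", "create_job",
                  "screen_resume", "override_rating"},
    "applicant": {"upload_resume", "delete_profile", "submit_question"},
}

def can(role: str, action: str) -> bool:
    """Return True only if the role's permission set includes the action."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Keeping permissions in one table makes it straightforward for the audit logging agent to record which role performed which action.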
Human review points:
- Consent gate: the applicant must click "I consent" before a resume is accepted.
- Recruiter override: to meet the human-in-the-loop requirement, the AI agent requires the recruiter to decide whether to advance, hold, reject, or escalate to human review.
- Applicant appeal: allows users to appeal an AI decision so that a recruiter can review and respond to the appeal.
- Bias alert review: when the AI detects that thresholds are exceeded, the admin must manually review the job criteria before continued use.
Outputs:
The FairHire AI outputs will include:
- Hiring Manager report with resumes that were recommended after HR Specialist approval and certification
- Applicant list with scores
- Bias audit report
- NYC Local Law 144 compliance PDFs and exports of artifacts in JSON format
5. Risk framing:
Design Stage
Regulatory Misalignment Risk: Misinterpreting requirements from NYC Local Law 144 and EU AI Act which could lead to compliance concerns.
Ensuring that bias concerns are addressed from design stage.
Build Stage
Using test resumes to verify bias testing.
Representative Risk: Scraped resumes may not represent current or real market standards.
Verify data for historical discrimination biases.
Validate Stage
Independent testing on data which should include bias, PII scrub, compliance requirements.
Deploy Stage
Legal Liability Risk: Incorrect scoring could expose the organization to penalties or lawsuits.
Ensure data privacy.
Prompt injection risk via uploaded resume content.
Operate Stage
Audit Failure Risk: the application must be monitored for missing logs or overdue audits; worse, losing data can create compliance problems.
Monitoring for data drift.
Evolve
Ensure the compliance department communicates any regulatory changes to the technical team so that they can update how the system works, and set updated guardrails.

Risk Management
Bias Audit
Ensure Data Integrity
Remove Proxy Variables
Ensure PII Data is Secure
Prompt Injection Security
Data Drift

6. Data Plan: Database used: Resume Dataset from Kaggle (https://www.kaggle.com/datasets/snehaanbhawal/resume-dataset)
The initial data set was pulled from Kaggle: 💼 Resume Screening Dataset (200K Candidates) and https://www.kaggle.com/datasets/snehaanbhawal/resume-dataset

Our test data was pulled from Kaggle; it contains over 2,400 PDFs across these categories: HR, Designer, Information-Technology, Teacher, Advocate, Business-Development, Healthcare, Fitness, Agriculture, BPO, Sales, Consultant, Digital-Media, Automobile, Chef, Finance, Apparel, Engineering, Accountant, Construction, Public-Relations, Banking, Arts, and Aviation. The dataset was compiled by Sbhawal, who scraped individual resume examples from www.livecareer.com. He published the web-scraping code in his GitHub repo (https://github.com/Sbhawal/resumeScraper). Additional resumes and job descriptions will be built by the team as testing progresses.


7. Tool stack:

| Resource Name | Link to Documentation |
| :--- | :--- |
| FairHire AI Test Application | Visit Posit |
| FairHire Repository | View Repository |
| any other link | Get PDF |


8. Implementation plan:
Implementation and Validation Plan: Our implementation plan includes creating the FairHire AI application. To ensure compliance, the build will include governance artifacts: an SR 11-7-style model inventory, a risk-tiering algorithm, a bias testing dashboard, and an automated release gate workflow that blocks deployment until benchmarks are met. The benchmarks we currently set for ourselves:
- PII scrubber: 100% scrub rate
- Bias testing: 0.80 impact ratio; if any protected group falls below 0.80, the release is blocked
- Security scan: ensure there is no API key leakage
- Audit trail: ability to review decisions made by FairHire AI
- Human in the loop: Human Resource Specialist/Recruiter confirmation that all resumes were reviewed and agreed with or overridden, to confirm accuracy and human oversight
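The automated release gate described above can be sketched as a simple benchmark check. The metric and key names here are assumptions for illustration, not the workflow's actual configuration:

```python
# Hypothetical benchmark thresholds mirroring the plan above.
BENCHMARKS = {
    "pii_scrub_rate": 1.00,    # PII scrubber must catch 100% of seeded PII
    "min_impact_ratio": 0.80,  # four-fifths threshold per protected group
}

def release_allowed(metrics: dict) -> bool:
    """Block deployment unless every benchmark is met."""
    if metrics["pii_scrub_rate"] < BENCHMARKS["pii_scrub_rate"]:
        return False
    # Block release if any protected group falls below the 0.80 ratio.
    if min(metrics["group_impact_ratios"].values()) < BENCHMARKS["min_impact_ratio"]:
        return False
    return metrics.get("security_scan_passed", False)

metrics = {
    "pii_scrub_rate": 1.0,
    "group_impact_ratios": {"group_a": 0.92, "group_b": 0.78},
    "security_scan_passed": True,
}
blocked = not release_allowed(metrics)  # group_b is below 0.80
```

Wiring a check like this into CI means no build reaches production while any benchmark fails, which is the intent of the release gate.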

9. Validation plan:
10. Deliverable and milestones:
Our genAI test application: FairHire AI
FairHire AI application GitHub repository: FairHire AI Repository



11. Role Allocation:
12. Risks and fallback plan:

