Find your model's failure modes before someone else does

I help AI teams stress-test production LLMs — probing for prompt injection, jailbreaks, data extraction, and behavioral drift — and deliver a clear report of what's exploitable and how to fix it.

Engagements

Each engagement is scoped to your model, deployment context, and risk profile. All findings are delivered as a written report with reproduction steps and mitigations.

Red team assessment

A structured adversarial evaluation of your LLM — prompt injection, jailbreak coverage, data extraction vectors, and behavioral edge cases.

From $5,000 / project

Advisory & scoping

Hourly advisory for teams that need a technical sounding board — threat modeling, eval design, or pre-launch risk review.

$200 / hr
Sean Yunt, Founder & Principal

Hi, I'm Sean

20+ years in software quality engineering, most recently as QA Manager at Providence Digital Innovation Group, where I led adversarial testing for Grace — a patient-facing, OpenAI-based chatbot handling appointment booking, prescription management, and symptom checking in a HIPAA-regulated environment.

I started Black Diamond Consulting to do focused, independent adversarial testing — without the overhead of a large firm and without compromising on rigor.

Engagement intake

Fill this out and I'll follow up within 2 business days. I'll send an NDA before our first call if needed.