Independent Applied Research Lab

Illuminating complexity. Building what's missing.

We investigate overlooked problems, validate them rigorously, and turn the results into open, auditable tools that work in the real world.

Lab Signals

What's new

Recent launches, milestones, and research updates from across the lab.

Fair reranking library now on PyPI — 99 KB, zero dependencies, Apache 2.0.

Brought the adverse impact ratio from 0.77 to 0.92 with three lines of code.

AI-assisted prior auth routing — currently in testing with healthcare partners.

Exploring privacy-preserving ad measurement without third-party cookies.

Bayesian community-biased retrieval — 1000x faster than LLM-based Graph RAG, validated on MIMIC-IV clinical data.

Haske Labs site is live

Update

Open portfolio of projects, research, and tutorials — all in one place.

In practice

Fixing a biased ranking system — with three lines of code

COMPAS is a tool used by courts across the U.S. to score how likely someone is to re-offend. In 2016, ProPublica found it was unfair: Black defendants were consistently scored as higher risk than white defendants with similar backgrounds.

The fairness score (called the Adverse Impact Ratio) was 0.77. Anything below 0.80 is considered failing. We ran governed-rank on the same data — no rebuilding the system, just three lines of code — and brought that score up to 0.92.

Before governed-rank: 0.77 → After: 0.92

Fairness score — from failing to passing

Original ranking accuracy kept intact

3 lines of code to fix it

The actual code

from governed_rank import GovernedRanker
ranker = GovernedRanker(protected="race", method="orthogonal")
fair_ranking = ranker.fit_transform(scores, sensitive_attrs)

governed-rank removes the unfair signal from the scores, locks the rankings the system is most confident about, then produces a final result that's fair. Every person in the list gets an audit trail — a clear record of what changed and why. That matters in courts, hiring, lending, and anywhere else decisions about people need to be explainable.
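The "removes the unfair signal" step can be pictured as a linear projection: subtract from the scores the component that the protected attribute linearly explains. Here is a minimal NumPy sketch of that idea, assuming binary group labels — it illustrates the concept, not governed-rank's internals, and all variable names are ours:

```python
import numpy as np

def orthogonalize(scores, protected):
    """Remove the linear component of `scores` predictable from a
    binary `protected` attribute (conceptual sketch, not the library)."""
    s = np.asarray(scores, dtype=float)
    p = np.asarray(protected, dtype=float)
    p = p - p.mean()                 # center: remove only group-level signal
    if np.allclose(p, 0):
        return s                     # no group variation, nothing to remove
    beta = (s @ p) / (p @ p)         # least-squares fit of scores on group
    return s - beta * p              # residuals are uncorrelated with group

scores = np.array([0.9, 0.8, 0.6, 0.4])
protected = np.array([1, 1, 0, 0])   # hypothetical group labels
fair_scores = orthogonalize(scores, protected)
```

After the projection, the correlation between the scores and the protected attribute is zero, while the within-group ordering of candidates is untouched.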

Projects

Research translated into applied impact

Interesting problems, rigorous solutions — open-source tools and published research you can build on.

Applied ML · Reranking & Policy Optimization · Open Source · March 2026

governed-rank: Governed Reranking for Any Domain

Steer ranked lists toward any policy objective — fairness, safety, fraud triage, content moderation — without breaking accuracy. Three-step pipeline: orthogonalize, protect, project. 99 KB, minimal dependencies.

⚡ 0.77→0.92 Fairness (COMPAS) • 10x Fraud Triage • 100% RAG Safety

Apache 2.0 · 99 KB · pip install
View case study
Applied ML / Decision Theory · Selective Prediction & Uncertainty · arXiv · March 2026

Confidence Gate Theorem: When Should Ranked Systems Abstain?

When should a ranked system skip a decision instead of guessing? Two cheaply testable conditions predict whether confidence-based abstention will help or hurt. The key determinant: is your uncertainty from missing data, or from a changing world?

⚡ 0 violations Clinical Triage • Clean curve Cold-Start • 4.9x E-Commerce Lift

arXiv 2603.09947 · 3 Domains · 7 Datasets
View case study
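The mechanic the theorem governs — skipping a decision instead of guessing when confidence is low — can be sketched in a few lines. The threshold value and the names below are illustrative placeholders, not the paper's formulation:

```python
def gated_decision(ranked, confidence, threshold=0.8):
    """Act on the top-ranked item only when confidence clears the
    threshold; otherwise abstain and defer. Illustrative sketch of
    confidence-gated abstention."""
    top_item, _ = max(ranked, key=lambda pair: pair[1])
    if confidence >= threshold:
        return ("decide", top_item)
    return ("abstain", None)      # skip the decision instead of guessing

candidates = [("treat", 0.9), ("defer", 0.2)]
```

Whether this gate helps or hurts is exactly what the theorem's two conditions test: abstention pays off when uncertainty comes from missing data, and can backfire when the world itself is shifting.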
Applied ML · Ad Tech & Personalization · Validated · March 2026

Cookieless Personalization: Session-Level Intent Without Tracking

A complete pipeline for ad and recommendation personalization that uses only session-level signals — no cookies, no persistent user IDs, no cross-site tracking. Three components work together: IntentLens detects what the session wants, the Confidence Gate decides when to trust that detection, and governed-rank steers the ranking without degrading relevance. Validated across 3 public datasets.

⚡ 4.9x RetailRocket • 4.5x Yoochoose • 1.9x Criteo

3 Public Datasets · Up to 4.9x Lift · Sub-5ms Latency
View case study
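The three-stage flow — detect intent, gate on confidence, then steer the ranking — can be sketched with toy stand-ins. IntentLens and the Confidence Gate are the project's real components, but the function bodies below are our simplified placeholders, not their actual APIs:

```python
from collections import Counter

def detect_intent(session_events):
    """Toy stand-in for IntentLens: the session's modal category,
    with its share of events as a confidence proxy."""
    counts = Counter(e["category"] for e in session_events)
    intent, n = counts.most_common(1)[0]
    return intent, n / len(session_events)

def personalize(session_events, ranked_items, threshold=0.7):
    """Sketch of the pipeline: detect session intent, gate on
    confidence, then rerank toward the detected intent."""
    intent, confidence = detect_intent(session_events)
    if confidence < threshold:           # Confidence Gate stand-in
        return ranked_items              # fall back to the generic ranking
    # governed-rank stand-in: prefer items matching the detected
    # intent, breaking ties by the original rank
    return sorted(ranked_items,
                  key=lambda it: (it["category"] != intent, it["rank"]))

events = [{"category": "shoes"}] * 3 + [{"category": "bags"}]
items = [{"id": 1, "category": "bags", "rank": 0},
         {"id": 2, "category": "shoes", "rank": 1}]
reranked = personalize(events, items)
```

Note that everything here is computed from the current session alone — no identifier persists once the session ends, which is the point of the design.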
Clinical Operations · Healthcare AI · Validated · March 2026

Confidence-Gated Prior Authorization: Automating Healthcare Triage at the Pathway Level

A confidence-gated triage system for prior authorization that detects care pathways from diagnosis and procedure codes, measures confidence in that detection, and routes requests to the appropriate review level — auto-approve, nurse review, or physician review. Validated on 10,000 real MIMIC-IV hospital encounters with 38.3% auto-approve rate and zero monotonicity violations.

⚡ 38.3% Auto-Approve Rate • 71.6% HIGH Confidence • 5.0x Pathway Lift

MIMIC-IV Validated · 10K Encounters · 151/151 Tests Pass
View case study
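The routing logic itself is simple to picture: confidence in the detected pathway maps monotonically onto three review levels. A minimal sketch — the thresholds below are illustrative placeholders, not the validated system's values:

```python
def route_prior_auth(confidence, high=0.85, low=0.50):
    """Route a prior-auth request by pathway-detection confidence.
    Higher confidence never routes to a stricter review level, which
    is the monotonicity property the case study checks for."""
    if confidence >= high:
        return "auto-approve"
    if confidence >= low:
        return "nurse review"
    return "physician review"

routes = [route_prior_auth(c) for c in (0.95, 0.60, 0.20)]
```

Because the mapping is a pair of fixed thresholds, monotonicity holds by construction — the hard part, and the substance of the validation, is producing a confidence score that deserves to be trusted this way.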
AI Safety · Fairness & Governance · Research · Howard University Collaboration

GBP-Audit: Safety-First Bias Correction for AI Models

Safety-first bias correction that knows when not to act. Geometric coherence distinguishes proxy bias from task-aligned bias, five guardrails prevent harmful interventions, and governance packets provide full audit trails. 3 of 7 datasets safely corrected, 4 correctly abstained. Zero accuracy degradation.

⚡ 7 Datasets Evaluated • 3 of 7 Safe Corrections • up to 73% Disparity Reduction

Published · IEEE CogMI 2025 · 5 Guardrails · 7 Datasets
View case study
Applied ML / Information Retrieval / Healthcare NLP · Retrieval-Augmented Generation & Agentic Systems · Benchmark validated · March 2026

Graph RAG Without the LLM: Bayesian Community-Biased Retrieval with Calibrated Abstention

Every Graph RAG system today requires an LLM in the retrieval loop — for entity extraction, community summarization, and deciding when to retrieve more. We show that a Bayesian inference pipeline can serve as the entire Graph RAG stack: soft community detection, structured retrieval, and agentic decision-making. On MIMIC-IV clinical data, community-biased retrieval achieves 63.8% precision lift over cosine-only search on adversarial queries, the agentic loop upgrades 87.5% of uncertain cases to high confidence, and the system runs at 2.65ms — not 3 seconds.

⚡ +63.8% Retrieval Lift • 87.5% Tier Upgrades • 2.65ms Latency

MIMIC-IV · 2604 Clinical Features · 3461 ICD-10 Codes
View case study
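One way to picture "community-biased retrieval" is as a rerank that adds a community prior to a plain similarity score. The sketch below is our toy reading of that idea in NumPy — cosine similarity plus a log-prior per community — not the system's actual Bayesian machinery, and the community names and weights are hypothetical:

```python
import numpy as np

def community_biased_scores(query_vec, doc_vecs, doc_communities, prior):
    """Toy sketch: cosine similarity plus a log-prior for each
    document's community, biasing retrieval toward communities the
    query is believed to belong to."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    cosine = d @ q
    log_prior = np.log([prior[c] for c in doc_communities])
    return cosine + log_prior

q = np.array([1.0, 0.0])
docs = np.array([[1.0, 0.0],          # highest cosine, wrong community
                 [0.9, 0.2]])         # slightly lower cosine, right community
communities = ["billing", "cardiology"]
prior = {"billing": 0.1, "cardiology": 0.6}   # hypothetical weights
biased = community_biased_scores(q, docs, communities, prior)
```

In this toy example the community prior flips the ranking: the document from the favored community wins even though its raw cosine score is slightly lower — the kind of correction that matters on adversarial queries where surface similarity misleads.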

How we work

Research-first, then ship it

Every project starts with a real problem and a literature review — and ends with something you can install.

Research-first development

Co-authoring studies, validating hypotheses, and carrying scientific rigor through every build sprint.

Lab-to-launch velocity

Turning prototypes into open tools people can actually use — without cutting corners on rigor.

Built-in accountability

We design for transparency — audit trails, bias checks, and clear documentation are part of the process, not afterthoughts.

Insights

Latest from the lab

8 articles — case studies, tutorials, and deep dives across our research.

Research

Following interesting problems wherever they lead

Active investigations spanning AI safety, decentralized systems, and human-machine collaboration.

Our current investigations span foundational and applied research — advancing both the principles and practice of responsible systems.

Active

AI & Applied Data

Reranking, Fairness, Predictive Analytics

Algorithms for governed reranking, bias audits, and fairness — with open-source code and reproducible results.

Current: governed-rank
Impact: 4 domain tutorials, open-source on PyPI
In Development

Blockchain & Decentralization

Privacy-Preserving Protocols, DeFi

Privacy-preserving protocols and cryptographic primitives for secure data collaboration across distributed networks.

Current: Healthcare Data Consortium
Impact: Active research area
Research Phase

Education & Learning Systems

Adaptive AI, Human-Machine Collaboration

Adaptive learning environments and co-creative AI — exploring how machines and people learn better together.

Current: Personalized Learning Engine
Impact: 2 universities engaged
Active

Cybersecurity & Ethics

Risk Assessment, Responsible AI

Studying how systems fail — and building auditable safeguards that keep AI accountable as it scales.

Current: AI Safety Auditing
Impact: 100% detection rate

Collaboration Briefing

Have an interesting problem?

We're always looking for the next hard question. Whether it's a research collaboration, a dataset that needs better tools, or an idea worth exploring — reach out at ronald@haskelabs.com.