Gradient Descent for Finance

LinuxToaster · March 2026 · linuxtoaster.com
Financial services runs on regulated text. SEC filings. Loan agreements. Compliance policies. Audit workpapers. Risk disclosures. Model documentation. Every one of these documents is iteratively refined against published rules — rules written by regulators who will fine you if you get them wrong.

The gradient descent pattern — toast for intelligence, ito for reversible history, jam for loops — fits finance the way it fits biopharma: the loss functions are explicit, the stakes are high, and the audit trail isn't optional.

Why Finance Fits

Financial regulation is prescriptive. Reg S-K tells you what goes in an SEC filing and how to say it. Basel III tells you how to document your capital models. SOX tells you what controls to document and how to evidence them. FINRA tells you how to supervise communications. The rules aren't vague — they're paragraph-level specific.

This is the same advantage biopharma has: the loss function is published. When the persona says "review this against Reg S-K Item 303," there's no ambiguity about what "better" means. The standard exists. The AI can check against it.

The other fit: financial documents are high-volume and high-consequence. A bank doesn't produce one compliance policy — it produces hundreds. A fund doesn't file one disclosure — it files quarterly. The gradient descent loop scales across a document library the same way it scales across a codebase.

SEC Filings

10-Ks, 10-Qs, 8-Ks, proxy statements. Regulation S-K and S-X define the structure and content requirements. The MD&A section alone has generated more SEC comment letters than any other part of a filing.

# .persona
You are an SEC disclosure specialist. You review filings
against Regulation S-K and S-X requirements and recent SEC
staff guidance. Focus on: completeness of required disclosures,
consistency between narrative and financial statements,
specificity of risk factors, and MD&A analytical rigor.
One improvement per round. Read .crumbs. DONE when the
filing would survive an SEC comment letter review.

# .tools
ito
cat
grep
wc
🍞 12 times toast 10k-mda.md "improve — Reg S-K Item 303, specificity, no boilerplate"
a2f3b4c  revenue discussion says "increased due to market conditions" — add specific drivers, quantify contribution
b4d5e6f  liquidity section missing material cash requirements discussion (including contractual obligations) required by amended Item 303(b)(1)
c6e7f8a  risk factor #3 is generic industry risk — rewrite to be company-specific per SEC plain English guidance
d8f9a0b  critical accounting estimates section doesn't disclose sensitivity ranges — add quantitative thresholds
e0a1b2c  forward-looking statements lack meaningful cautionary language — boilerplate safe harbor is insufficient
f2b3c4d  segment discussion doesn't reconcile to Note 14 in financial statements — add bridge
a4c5d6e  non-GAAP measure "Adjusted EBITDA" missing reconciliation to nearest GAAP measure
b6d7e8f  new accounting standard adoption (ASC 326) impact disclosed as "immaterial" — quantify or justify
c8e9f0a  cybersecurity risk factor needs updating per new Item 106 requirements
d0f1a2b  executive compensation discussion references peer benchmarking but doesn't identify the peer group
DONE after 10 rounds

Every one of these is the kind of deficiency that shows up in an SEC comment letter six weeks after filing — when the stock is already trading on the disclosure. Each round catches one gap before the staff reviewer does.
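
Some of these gaps are cheap to pre-screen before spending toast rounds on them. A minimal sketch of a non-GAAP reconciliation check using only grep, assuming an illustrative draft file (`mda-sample.md`) and a hand-maintained measure list — neither is part of the toast toolchain:

```shell
# Pre-screen: flag any non-GAAP measure that appears in the draft
# without reconciliation language anywhere in the file.
# Sample file and measure list are illustrative.
cat > mda-sample.md <<'EOF'
Adjusted EBITDA increased 12% year over year.
We believe Free Cash Flow is a useful measure for investors.
EOF

for measure in "Adjusted EBITDA" "Free Cash Flow"; do
  # Flag the measure if it is used but the draft never mentions a reconciliation
  if grep -q "$measure" mda-sample.md && ! grep -qi "reconcil" mda-sample.md; then
    echo "FLAG: $measure used without reconciliation language"
  fi
done > flags.txt
cat flags.txt
```

Both measures get flagged here because the sample never mentions a reconciliation. Crude, but it runs across every section of a draft in seconds.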

Loan Documentation

Credit agreements, term sheets, intercreditor agreements. Thousands of defined terms, cross-references, and conditions that must be internally consistent. One undefined term in a 200-page credit agreement can blow up in a workout.

# .persona
You are a banking lawyer reviewing credit documentation.
You check for: defined term consistency, cross-reference
accuracy, covenant tightness, basket interactions, and
standard market protections. You compare against LMA/LSTA
standard forms where applicable. One issue per round.
Read .crumbs. DONE when the document is closeable.

# .tools
ito
cat
grep
wc
🍞 10 times toast credit-agreement.md "tighten — defined terms, cross-refs, covenant gaps"
a3d4e5f  "Material Adverse Effect" definition missing carve-out for industry-wide changes — standard market
b5e6f7a  financial covenant section references "Consolidated EBITDA" but definition includes non-cash addbacks with no cap
c7f8a9b  permitted investments basket cross-references Section 7.03(f) which doesn't exist — should be 7.03(e)
d9a0b1c  change of control definition doesn't capture indirect transfers — add beneficial ownership language
e1b2c3d  restricted payments basket allows dividends up to "Available Amount" but Available Amount builder has no starting date
f3c4d5e  J.Crew blocker missing — unrestricted subsidiary designation could move assets beyond creditor reach
a5d6e7f  LIBOR fallback language still references LIBOR — update to SOFR with CSA spread adjustment
b7e8f9a  mandatory prepayment sweep percentage steps down at leverage ratio that isn't tested at same frequency
c9f0a1b  assignment provision allows assignment to disqualified lender list — but list isn't attached as exhibit
DONE after 9 rounds

The cross-reference to a nonexistent section. The undefined starting date. The LIBOR language that should have been updated two years ago. These are the findings that surface at 2 AM the night before closing — or worse, during enforcement. Each round finds one before it matters.
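
The dangling cross-reference (crumb c7f8a9b above) is mechanical enough to pre-screen in plain shell before the loop starts. A sketch assuming markdown-style "## Section" headings and a toy agreement; note that sort and comm go beyond the minimal .tools list:

```shell
# Pre-screen: every "Section X.YY" cross-reference should resolve to a
# heading that actually exists. Sample text and the "## Section" heading
# convention are illustrative.
cat > agreement-sample.md <<'EOF'
## Section 7.03(e) Permitted Investments
Investments permitted under Section 7.03(f) are excluded from the basket.
Prepayments are governed by Section 2.05.
## Section 2.05 Mandatory Prepayments
EOF

# Every section reference that appears anywhere in the document
grep -oE 'Section [0-9]+\.[0-9]+(\([a-z]\))?' agreement-sample.md | sort -u > refs.txt
# Sections actually defined as headings
grep -E '^## Section' agreement-sample.md | grep -oE 'Section [0-9]+\.[0-9]+(\([a-z]\))?' | sort -u > defs.txt
# References with no matching heading
comm -23 refs.txt defs.txt > dangling.txt
cat dangling.txt
```

Here dangling.txt contains Section 7.03(f): the reference with no matching heading.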

Compliance Policies

AML/KYC policies. Trading compliance manuals. Information barrier procedures. Regulators don't just want you to be compliant — they want you to document compliance. And they want the documentation to be specific, current, and enforceable.

# .persona
You are a compliance officer reviewing internal policies
against regulatory requirements (BSA/AML, FINRA Rules,
Dodd-Frank, MiFID II as applicable). Focus on: regulatory
alignment, specificity of procedures, escalation clarity,
recordkeeping requirements, and enforceability. Vague policies
fail exams. One improvement per round. Read .crumbs. DONE
when the policy would survive a regulatory examination.

# .tools
ito
cat
grep
wc
🍞 10 times toast aml-policy.md "tighten — BSA/AML, FinCEN guidance, no vague procedures"
a1c2d3e  CDD section says "obtain sufficient information" — replace with specific data elements per CDD Rule
b3d4e5f  SAR filing threshold referenced as $5,000 but policy covers MSB activity — threshold is $2,000
c5e6f7a  enhanced due diligence triggers list missing PEP screening — add per FFIEC manual
d7f8a9b  transaction monitoring section doesn't specify review timeframes — add "within 5 business days of alert generation"
e9a0b1c  CTR aggregation rule described incorrectly — must aggregate across all accounts, not just same-day same-account
f1b2c3d  escalation path ends at "senior management" — name the role: BSA Officer or designee
a3c4d5e  beneficial ownership threshold stated as 10% — should be 25% per CDD Rule (unless exchange-listed exemption applies)
b5d6e7f  recordkeeping section missing retention periods — add 5-year minimum per 31 CFR 1010.430
DONE after 8 rounds

A policy that says "obtain sufficient information" fails an OCC exam. A policy that says "collect full legal name, date of birth, address, and SSN/TIN for all natural persons; verify against documentary or non-documentary methods per CDD Rule §1010.230(b)" passes. Each round replaces one vagueness with one regulatory citation.
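
The vague-procedure findings are also partly greppable up front. A sketch of a crude vagueness linter; the phrase list is my assumption, a starting point rather than anything out of FinCEN guidance:

```shell
# Pre-screen: flag phrases that read as policy but aren't enforceable.
# Sample policy text and phrase list are illustrative.
cat > aml-sample.md <<'EOF'
Staff must obtain sufficient information to identify the customer.
Alerts are reviewed within 5 business days of generation.
Escalate unresolved matters to senior management as appropriate.
EOF

# -n prints line numbers so findings map back to the policy text
grep -nE 'sufficient information|as appropriate|as needed|periodically|timely manner' aml-sample.md > vague.txt
cat vague.txt
```

Lines 1 and 3 get flagged; line 2, with its specific timeframe, passes.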

Model Risk Documentation

SR 11-7 and SS1/23 (Bank of England) require that every model used for decision-making have documentation covering development, validation, limitations, and ongoing monitoring. Model risk management is a documentation problem.

# .persona
You are a model risk analyst. You review model documentation
against SR 11-7 (Federal Reserve) and OCC 2011-12 requirements.
Focus on: conceptual soundness, limitations disclosure,
assumptions documentation, performance monitoring thresholds,
and validation evidence. One finding per round. Read .crumbs.
DONE when the documentation would pass model validation review.

# .tools
ito
cat
grep
🍞 10 times toast credit-scoring-model-doc.md "review — SR 11-7, completeness, limitations honesty"
a2e3f4a  model limitations section lists only data limitations — add conceptual limitations of logistic regression for this use case
b4f5a6b  development sample described but holdout/validation sample split not documented — add ratio and methodology
c6a7b8c  feature selection rationale missing — 47 variables used but no explanation of why these and not others
d8b9c0d  performance metrics show only AUC — add KS statistic, Gini coefficient, and calibration metrics per validation standards
e0c1d2e  monitoring section says "model will be monitored" — specify metrics, frequency, and trigger thresholds for re-validation
f2d3e4f  stress testing results not included — add performance under adverse macroeconomic scenarios
a4e5f6a  override policy documented but override rate tracking not specified — regulators will ask for this
b6f7a8b  vendor model components identified but not decomposed — SR 11-7 requires same rigor for vendor models
DONE after 8 rounds

Model risk examiners read these documents looking for exactly these gaps — undocumented assumptions, missing validation evidence, vague monitoring plans. A model that performs well but is poorly documented can still get a "needs improvement" rating. Each round closes one documentation gap that an examiner would flag.

Audit Workpapers

PCAOB standards require that audit documentation support the conclusions reached and be understandable to an experienced auditor with no prior connection to the engagement. Inspection findings routinely cite insufficient documentation.

# .persona
You are an audit quality reviewer. You review workpapers
against PCAOB AS 1215 (Audit Documentation) and relevant
auditing standards. Focus on: sufficiency of evidence
documented, linkage between procedures and conclusions,
specificity of testing described, and standalone
comprehensibility. One improvement per round. Read .crumbs.
DONE when the workpaper would survive PCAOB inspection.

# .tools
ito
cat
grep
🍞 8 times toast revenue-testing-wp.md "improve — AS 1215, evidence sufficiency, conclusion support"
a1d2e3f  conclusion states "revenue is fairly stated" but workpaper doesn't document extent of testing — add sample size and selection method
b3e4f5a  substantive procedure references tolerable misstatement of $2M but no documentation of how threshold was set
c5f6a7b  inquiry of management documented but no corroborating evidence — add cross-reference to contract inspection results
d7a8b9c  cutoff testing described as "performed" — document specific invoices tested, dates, and amounts
e9b0c1d  journal entry testing section missing — required per AS 2401 for fraud risk procedures
f1c2d3e  management representation letter not cross-referenced — add as supporting evidence for key estimates
a3d4e5f  prior year carry-forward conclusions referenced but not re-evaluated for current year conditions
DONE after 7 rounds

PCAOB inspection reports cite "insufficient documentation" more than any other deficiency category. A workpaper that says "tested revenue cutoff — no exceptions" fails. A workpaper that documents what was tested, how it was selected, what was found, and how the conclusion follows from the evidence passes. Each round adds the specificity that survives inspection.

Risk Disclosures

Fund offering documents, prospectuses, risk factor sections. FINRA, the SEC, and prudential regulators all want risk disclosures that are specific, not aspirational. Generic risk factors are worse than useless — they create a false sense of compliance.

# .persona
You are a securities disclosure attorney. You review risk
factors and offering document disclosures for specificity,
materiality, and regulatory compliance. Generic risk factors
that could apply to any company are failures — every risk
must be tied to this entity's specific circumstances. One
improvement per round. Read .crumbs. DONE when disclosures
are specific and defensible.

# .tools
ito
cat
grep
wc
🍞 8 times toast risk-factors.md "sharpen — entity-specific, material, no boilerplate"
a2c3d4e  "we may be subject to litigation" — rewrite: disclose the three pending matters and quantify exposure range
b4d5e6f  concentration risk factor is generic — add that top 3 clients represent 47% of revenue, name the sector
c6e7f8a  regulatory risk factor references "changing regulations" — specify Dodd-Frank Section 619 impact on prop trading revenue
d8f9a0b  cybersecurity risk factor missing despite two incidents disclosed in 8-K last year — add and cross-reference
e0a1b2c  interest rate risk disclosure says "we may be affected" — quantify: 100bp shift impacts NII by approximately $12M
f2b3c4d  key person risk factor doesn't name individuals or explain concentration — add CEO/CIO dependency and succession status
a4c5d6e  geopolitical risk factor mentions "international operations" — specify: 31% revenue from jurisdictions under OFAC scrutiny
DONE after 7 rounds

"We may be subject to litigation" is not a risk disclosure. It's a sentence that exists in every prospectus ever filed. "We are currently defendants in three actions with aggregate claimed damages of $45M" is a risk disclosure. Each round converts one piece of boilerplate into one piece of substance.

The Compliance Advantage

Finance has the same advantage as pharma: regulators want trails. Every examination, every inspection, every enforcement action starts with "show me the documentation." The gradient descent pattern produces the trail as a byproduct.

ito history: the audit trail. Every change logged with intent. Content-addressed, immutable, hashed. When the examiner asks who reviewed what and when, it's here.

.crumbs: the review rationale. What was found, what was fixed, what's still open. Maps to supervisory review requirements under FINRA Rule 3110.

.persona: the methodology. The review criteria, explicitly stated. Version-controlled, reproducible, examinable. The same persona on the same document type produces consistent review quality.

Compare this to the current state: a shared drive with 200 policies in various stages of review, tracked changes from six people, version confusion between "Q4 draft" and "Q4 final," and an email chain somewhere that constitutes the approval. The gradient descent approach gives you better documents and a process that's inherently auditable.

Scaling Across the Firm

A mid-size bank might have 400 compliance policies, 50 model documents, quarterly SEC filings, and thousands of loan documents. The pattern scales because each document is independent.

# Annual policy review — all compliance policies
for policy in policies/*.md; do
  🍞 5 times toast "$policy" "review — current regulations, no stale references, enforceable"
done

# Pre-filing review of 10-K
for section in 10k-draft/*.md; do
  🍞 8 times toast "$section" "review — Reg S-K, internal consistency, quantitative specificity"
done

# Loan portfolio documentation QC
for doc in loan-docs/2026-Q1/*.md; do
  🍞 3 times toast "$doc" "check — defined terms, cross-refs, missing exhibits"
done

# Model inventory documentation refresh
for model in models/*.md; do
  🍞 6 times toast "$model" "review — SR 11-7, limitations, monitoring thresholds"
done

The ito history across the entire policy library becomes the evidence of your annual review cycle. When the OCC examiner asks "how do you ensure policies are current?" — you show them the history.

What It Doesn't Replace

The AI doesn't know your client base. It doesn't know your risk appetite. It doesn't make judgment calls about materiality or business strategy. It doesn't replace the general counsel, the chief risk officer, or the engagement partner.

What it does: the mechanical compliance work. Checking that every required disclosure exists. Flagging stale regulatory references. Tightening vague procedures into specific ones. Catching the defined term that's used but never defined, the cross-reference that points to the wrong section, the monitoring threshold that was never specified.
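
The used-but-never-defined term is one of those mechanical catches, and it can be pre-screened in shell. A naive sketch comparing '"Term" means' definitions against capitalized phrases in the body; the regexes are illustrative and will miss terms with all-caps words like Consolidated EBITDA:

```shell
# Pre-screen: capitalized terms used in the body but never introduced
# with a '"Term" means ...' definition. Sample and regexes are naive
# illustrations, not a parser.
cat > terms-sample.md <<'EOF'
"Available Amount" means the sum of (a) $50,000,000 and (b) retained excess cash flow.
Dividends are permitted up to the Available Amount.
Restricted Payments must satisfy the conditions of this covenant.
EOF

# Terms that are formally defined
grep -oE '"[A-Z][A-Za-z ]+" means' terms-sample.md | sed 's/" means//; s/^"//' | sort -u > defined.txt
# Capitalized multi-word phrases used anywhere
grep -oE '[A-Z][a-z]+( [A-Z][a-z]+)+' terms-sample.md | sort -u > used.txt
# Used but never defined
comm -13 defined.txt used.txt > undefined.txt
cat undefined.txt
```

The sample prints Restricted Payments: used in the body, defined nowhere.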

The human makes the decisions. The AI handles the rigor. The trail documents both.

Getting Started

# .persona for your domain — pick one, tune it
You are a [compliance officer | disclosure attorney | model
risk analyst | audit quality reviewer]. You review [document
type] against [specific regulation/standard]. One improvement
per round. Read .crumbs. DONE when [quality threshold].

# .tools — keep it minimal
ito
cat
grep
wc
# Start with a document you know has issues
$ cd compliance-review && ito init
$ cp ~/policies/aml-policy-v3.md .

# Pair mode first
$ toast
> review this AML policy against current FinCEN guidance

# Then let it run
🍞 8 times toast aml-policy-v3.md "tighten — BSA/AML, FinCEN, specific and enforceable"
$ ito history

Read the trail. Financial regulation has a useful property: when the AI flags something, you can look up the specific rule it's referencing. The feedback loop is tight. The standard is external. The document either meets it or it doesn't.

Finance has spent decades complaining about the burden of regulatory documentation. The rules aren't going away. But the mechanical work of checking documents against those rules — that's gradient descent. The regulators already wrote the loss function. Run the loop.
