Educational guide

Criminal defense AI use has to stay inside a lawyer-supervised ethics workflow.

AI guidance from the ABA, state bars, and courts is converging around familiar duties: competence, confidentiality, supervision, communication, fees, and accuracy. Criminal defense software should make those review points visible rather than implying AI output is self-validating.

Direct answer

AI can assist a defense workflow, but it cannot own the judgment.

Current AI ethics guidance generally does not ban generative AI. It requires lawyers to understand the tool, protect confidential information, supervise outputs, communicate when required, avoid false filings, and charge reasonably. For criminal defense, those duties are sharper because discovery, witness facts, sealed matters, plea strategy, immigration consequences, and investigation work can all involve sensitive client information.

Regulatory framework

The ethics framework is old duties applied to new tools.

The AI guidance reviewed for this page points to duties lawyers already know, now under AI-specific implementation pressure.

Competence includes technology understanding

ABA Model Rule 1.1 and state guidance require lawyers to understand enough about the tool's benefits, limits, data handling, and error patterns to use it responsibly.

Confidentiality is the first criminal-defense constraint

Client facts, discovery, witness information, sealed records, and strategy notes should not be placed into unvetted tools without confidentiality, consent, and vendor-review analysis.

Supervision applies to AI-assisted output

AI output is not a substitute for lawyer review. Citations, summaries, pleadings, discovery indexes, client communications, and sentencing materials need human verification before use.

Court rules may add filing obligations

New York Part 161 and New Jersey court guidance show that court-facing AI rules can sit alongside bar ethics duties. The matter file should track court-specific AI instructions where they apply.

Procedure walkthrough

Put AI review points inside the defense matter.

The AI ethics workflow should be visible before an output becomes a filing, client communication, or strategy artifact.

01

Classify the AI use

Separate administrative drafting, summarization, discovery indexing, legal research, client communication, witness preparation, and court filing support. Different use cases create different review and disclosure questions.

02

Flag confidential inputs

The file should show whether the proposed AI use involves client identity, discovery, witness facts, sealed records, privileged notes, or third-party confidential data.

03

Require output verification

A defense workflow should distinguish raw AI output from attorney-reviewed output, cite-checked output, client-ready material, and court-ready material.

04

Track jurisdiction-specific guidance

California, Florida, Texas, North Carolina, New York, New Jersey, DC, Maryland, and Pennsylvania sources show that guidance evolves. The matter should record which jurisdictional source was reviewed.

05

Preserve billing and communication review

If AI materially changes time spent, fee descriptions, client instructions, or informed-consent posture, the billing and communication record should stay near the work.
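The five review points above can be sketched as a simple matter-record structure. This is an illustrative sketch only, not Legal Core's actual data model; every class, field, and tag name below is hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum

class AIUse(Enum):
    """Hypothetical AI-use classifications (step 01)."""
    ADMIN_DRAFTING = "admin_drafting"
    SUMMARIZATION = "summarization"
    DISCOVERY_INDEXING = "discovery_indexing"
    LEGAL_RESEARCH = "legal_research"
    CLIENT_COMMUNICATION = "client_communication"
    WITNESS_PREP = "witness_prep"
    COURT_FILING_SUPPORT = "court_filing_support"

class ReviewStatus(Enum):
    """Output verification states (step 03): raw output is never court-ready."""
    RAW_AI_OUTPUT = 1
    ATTORNEY_REVIEWED = 2
    CITE_CHECKED = 3
    CLIENT_READY = 4
    COURT_READY = 5

@dataclass
class AIUseRecord:
    """One AI use inside a defense matter file."""
    use: AIUse
    confidential_inputs: list[str] = field(default_factory=list)   # step 02
    status: ReviewStatus = ReviewStatus.RAW_AI_OUTPUT              # step 03
    jurisdiction_sources: list[str] = field(default_factory=list)  # step 04
    billing_notes: str = ""                                        # step 05

    def court_ready(self) -> bool:
        # Only material that has passed the full review chain is court-ready.
        return self.status is ReviewStatus.COURT_READY

record = AIUseRecord(use=AIUse.DISCOVERY_INDEXING,
                     confidential_inputs=["witness facts"])
print(record.court_ready())  # raw AI output is not court-ready
```

The point of the sketch is that classification, confidential-input flags, review status, and jurisdiction sources live on the same record, so each gate is inspectable rather than implied.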

Local variation

State guidance is evolving and uneven.

A multi-state defense firm should not assume one bar opinion answers every jurisdiction.

California and Florida

California practical guidance and Florida Opinion 24-1 both frame AI through professional duties, including competence, confidentiality, supervision, communication, and fees.

Texas and North Carolina

Texas Opinion 705 and North Carolina 2024 Formal Ethics Opinion 1 both emphasize lawyer responsibility for understanding, supervising, and reviewing AI-assisted work.

New York and New Jersey courts

New York Part 161 and New Jersey court AI guidance are court-system signals. Defense teams should track court-facing AI obligations separately from internal drafting policy.

National baseline

ABA Formal Opinion 512 is a useful national baseline, but it does not displace state rules, local court orders, or client-specific confidentiality instructions.

Implementation check

AI implementation needs review gates, not marketing claims.

A defensible AI workflow makes lawyer responsibility easy to inspect.

01

Use AI-use tags

Tags for no AI, administrative AI, legal research AI, discovery summary AI, client communication AI, court filing AI, and prohibited AI help separate risk levels.

02

Keep vendor and confidentiality notes

For tools outside the firm's approved stack, the matter should show who reviewed terms, data handling, confidentiality, and client instructions.

03

Separate raw and reviewed output

Raw AI drafts should not occupy the same status as attorney-verified filings, cite-checked memos, or client-ready advice.

04

Test criminal-defense examples

Evaluation should use discovery summaries, sealed matter notes, sentencing facts, and motion drafts to test whether ethics review stays visible.
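The tag-and-gate idea in the four checks above can be sketched as a lookup from AI-use tag to required review gates. All tag and gate names here are hypothetical, not Legal Core configuration.

```python
# Illustrative review-gate check; tag and gate names are assumptions.
REQUIRED_GATES = {
    # Higher-risk tags require more review gates before delivery.
    "no_ai": set(),
    "administrative_ai": {"attorney_review"},
    "legal_research_ai": {"attorney_review", "cite_check"},
    "discovery_summary_ai": {"attorney_review", "confidentiality_check"},
    "client_communication_ai": {"attorney_review", "client_ready_review"},
    "court_filing_ai": {"attorney_review", "cite_check", "filing_review"},
}

def gates_missing(tag: str, completed: set[str]) -> set[str]:
    """Return the review gates still open for this AI-use tag."""
    if tag == "prohibited_ai":
        raise ValueError("prohibited AI use: no gate set makes this deliverable")
    return REQUIRED_GATES.get(tag, set()) - completed

# A draft motion tagged court_filing_ai with only attorney review done
# still has cite-check and filing review open.
print(sorted(gates_missing("court_filing_ai", {"attorney_review"})))
```

A design choice worth noting: a prohibited tag raises rather than returning an empty gate set, so "nothing left to review" can never be confused with "this use was never permissible."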

Practitioner review limits

AI ethics decisions remain lawyer-owned.

Legal Core can organize review status and source references. It does not decide whether an AI use is ethically permissible.

01

The workflow can hold review context

A criminal defense AI ethics workflow can be represented as source references, checklists, matter tags, responsible owners, review status, and delivery notes. Competence, confidentiality, supervision, communication, fee, disclosure, and court-filing decisions remain lawyer-reviewed.

02

Primary authority controls the file

ABA, state bar, court, client, and local-rule guidance control the file. The implementation record should identify the source that controlled the decision rather than relying on a generic national template.

03

Escalation belongs in the matter record

A useful system shows when a file was escalated to an attorney, licensed agency owner, regulator-facing reviewer, or court-filing reviewer. It should not hide legal review inside freeform notes.

04

Migration and training need sample files

Before cutover, teams should test ordinary, sensitive, high-volume, and exception-heavy files. Those samples reveal whether the workflow preserves the regulatory and procedural context that actually matters.

Butler workflow relevance

Legal Core can make AI review visible without claiming AI judgment.

Legal Core can track AI-use notes, confidentiality flags, source references, attorney review, cite-check status, filing status, client communication status, and migration review. It does not claim autonomous legal analysis or AI-driven ethics compliance.

Related Butler pages

AI and criminal defense workflow links

FAQ

Criminal defense AI ethics FAQ

Is this AI ethics guide legal advice?

No. It is educational workflow guidance for practitioners evaluating software and implementation records. State law, court rules, regulator instructions, ethics duties, privilege analysis, filing decisions, and evidence-use decisions remain practitioner-reviewed.

Can Butler decide whether a particular AI use is ethically permissible?

No. Legal Core can track source references, review status, responsible owners, evidence or document context, and implementation notes. It does not make legal, regulatory, ethics, filing, or admissibility determinations.

Why does this AI ethics page link to state and city pages?

Cross-cutting workflow only becomes useful when it is tied to actual jurisdictions. The linked geography pages show the state, county, court, licensing, or bail-market context that controls implementation in real practice.

How should a firm use this guide during a software evaluation?

Build a demo from real files: one ordinary matter, one sensitive or regulated matter, one multi-jurisdiction matter, one migrated source-system file, and one practitioner-review handoff. The evaluation should test whether the system keeps sources, responsibility, status, and limits visible.

Does Butler claim direct court, regulator, or licensing integration from this guide?

No. These pages describe firm-side workflow organization. Direct court filing, licensing submission, regulator reporting, source-system export, and official record changes must be separately scoped and validated before they are represented as product behavior.

Where should a practitioner go next after reading this AI ethics guide?

Start with Legal Core, then review the linked state, city, pricing, migration, and related educational pages that match the firm's actual jurisdictions and vertical.

Sources checked

AI ethics sources checked

Sources combine ABA model-rule guidance, ABA Formal Opinion 512 coverage, state bar guidance, court AI guidance, and professional conduct rules current at build time.

Next step

Evaluate Legal Core with a reviewed AI-use scenario.

Bring one discovery summary, one draft motion, one sealed matter, and one client communication into the evaluation so AI review fields can be tested against actual criminal defense work.