Butler framework
AI ethics guidance is converging around familiar duties: competence, confidentiality, supervision, client communication, and accuracy. Criminal defense software should make those duties easier to inspect.
Thesis
AI-augmented criminal defense software can help organize drafts, summaries, discovery review, intake notes, and workflow context while keeping attorney review visible. AI-automated software that implies it can make legal judgments, supervise itself, or bypass confidentiality review creates a different risk profile. Butler's framework starts with supervision and source visibility, not AI autonomy.
Context
ABA Formal Opinion 512 and state guidance from California, Florida, Texas, North Carolina, New York, New Jersey, DC, and other jurisdictions generally apply existing professional duties to new AI tools. Lawyers still need competence. Confidential information still needs protection. Nonlawyer and technology-assisted work still needs supervision. Court-facing use may trigger local instructions or certification rules.
Criminal defense raises the stakes because the data is sensitive: discovery, witness information, sealed records, privileged notes, client communications, immigration consequences, sentencing facts, and investigator work. The software design question is not whether AI is exciting. It is whether the product preserves review, confidentiality, and accountability.
The best AI design also avoids forcing a firm into one philosophical position. Some firms will adopt AI for administrative summaries first. Others will use it only for internal drafting. Some will avoid it entirely until more guidance develops. The product should support all three postures.
This is why AI ethics is not only a matter of policy documents. It shapes interface decisions: whether output is labeled, whether source material is visible, whether review status is required, whether court-facing use is separated from internal notes, and whether administrators can disable workflows that do not fit firm policy.
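As one small illustration of the labeling decision, the sketch below shows how an interface might caption AI output before attorney review is recorded. The function name and label text are hypothetical, not Butler's actual interface behavior.

```typescript
// Hypothetical labeling rule: raw AI output is always marked, and court-facing
// drafts carry a stronger warning until attorney review is recorded.
function aiOutputLabel(reviewRecorded: boolean, courtFacing: boolean): string {
  if (reviewRecorded) {
    return "Attorney reviewed";
  }
  return courtFacing
    ? "AI-generated draft: attorney review required before any court use"
    : "AI-generated draft: not yet reviewed";
}
```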
Practitioner implications
Ethics-aware design should make lawyer supervision operational.
A draft summary, motion outline, or discovery note generated with AI should not be treated as court-ready or client-ready until attorney review is recorded.
The matter should show when an AI workflow touches client identity, discovery, sealed material, witness facts, or privileged analysis.
State bar opinions and court rules evolve. The file should preserve the jurisdictional source or internal policy that controlled the AI-use decision.
Ethics-aware software should let a firm disable or avoid AI-assisted workflows without breaking ordinary matter management.
Internal summarization, client communication, and court filing support should not share one status. Court-facing use may require additional review or jurisdiction-specific tracking.
A matter record should make it possible to see what was generated, what was changed, who approved it, and whether it was ever used outside the firm. One possible shape for that record is sketched below.
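One way to make these requirements concrete is to model every AI-assisted artifact as a record that carries its own review state, confidentiality flags, and controlling authority. The sketch below is illustrative only; the type and field names (AiArtifact, ReviewStatus, and so on) are assumptions about how such a record could look, not Butler's actual schema.

```typescript
// Illustrative data shape only: all names are assumptions, not Butler's schema.

type UseType = "internal_draft" | "client_communication" | "court_facing";

type ReviewStatus =
  | { state: "unreviewed" }                                        // raw AI output
  | { state: "reviewed"; attorneyId: string; reviewedAt: string }
  | { state: "approved_for_delivery"; attorneyId: string; approvedAt: string };

interface ConfidentialityFlags {
  touchesClientIdentity: boolean;
  touchesDiscovery: boolean;
  touchesSealedMaterial: boolean;
  touchesWitnessFacts: boolean;
  touchesPrivilegedAnalysis: boolean;
}

interface AiArtifact {
  matterId: string;
  useType: UseType;               // internal, client-facing, and court-facing work do not share one status
  inputDescription: string;       // what input was used
  generatedText: string;          // what was generated
  editedText?: string;            // what was changed, if anything
  reviewStatus: ReviewStatus;     // nothing is court- or client-ready until attorney review is recorded here
  confidentiality: ConfidentialityFlags;
  controllingAuthority: string;   // the bar opinion, court rule, or firm policy that controlled the AI-use decision
  deliveredOutsideFirm: boolean;  // whether the output was ever used outside the firm
  owningAttorneyId: string;
}
```

Keeping review status and confidentiality flags on the artifact itself, rather than in a separate log, is what lets the matter view surface them without extra steps.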
Butler point of view
Butler's approach is to treat AI as workflow support that remains inside practitioner control. That means visible source references, review status, confidentiality flags, attorney ownership, cite-check posture, and client communication context. It does not mean autonomous legal advice or unsupervised filing decisions.
This is a product philosophy, not a prediction that every firm will adopt AI immediately. Some defense lawyers are right to move slowly. The system should support careful adoption, no adoption, and future adoption as ethics guidance and firm policy mature.
The design test is whether a reviewer can reconstruct what happened: what input was used, what output was produced, who reviewed it, what source or policy controlled the decision, and whether the output was ever delivered outside the firm.
That design test is intentionally modest. Butler should not market AI as a replacement for criminal defense judgment. The more credible claim is narrower: a well-designed system can help practitioners keep AI-assisted work inside the same supervision, confidentiality, and matter-review discipline they already owe clients.
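To show how the reconstruction test above could be operationalized, the sketch below assembles a review trail from the illustrative AiArtifact record defined earlier. The function and report fields are assumptions, not a description of Butler's implementation.

```typescript
// Illustrative only: pulls the reconstruction answers out of a single artifact record.
interface ReviewTrail {
  whatInputWasUsed: string;
  whatWasGenerated: string;
  whatWasChanged: string | null;
  whoReviewedIt: string | null;   // null means attorney review has not been recorded
  controllingAuthority: string;
  deliveredOutsideFirm: boolean;
}

function reconstruct(artifact: AiArtifact): ReviewTrail {
  const review = artifact.reviewStatus;
  return {
    whatInputWasUsed: artifact.inputDescription,
    whatWasGenerated: artifact.generatedText,
    whatWasChanged: artifact.editedText ?? null,
    whoReviewedIt: review.state === "unreviewed" ? null : review.attorneyId,
    controllingAuthority: artifact.controllingAuthority,
    deliveredOutsideFirm: artifact.deliveredOutsideFirm,
  };
}
```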
Limits
A credible AI position has to acknowledge where restraint is the better design.
Current bar and court guidance may change. Software should be built so AI review policy can change without rewriting the entire matter model; one way to keep policy separate is sketched below.
A firm without vendor review, internal policy, or attorney supervision capacity may be better served by disabling AI-assisted workflows.
Any workflow that bypasses lawyer review for legal analysis, client advice, court filings, or evidence use is outside Butler's practitioner framework.
If a proposed AI workflow cannot satisfy the firm's confidentiality and vendor-review requirements, the right answer may be not to use it.
The product can make review visible, but the firm still needs a policy for who may use AI, for what work, with what data, and under which jurisdictional constraints.
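The firm-level policy described above can live as configuration the product reads rather than logic baked into the matter model, which is what lets review rules change as guidance evolves. The sketch below reuses the illustrative UseType and ConfidentialityFlags names from earlier; every field here is hypothetical, not a Butler feature list.

```typescript
// Hypothetical firm AI policy, kept separate from the matter model so it can
// change as bar and court guidance evolves. Reuses the illustrative types above.
interface FirmAiPolicy {
  aiAssistedWorkflowsEnabled: boolean;                     // a firm may turn AI off entirely
  permittedUseTypes: UseType[];                            // e.g. internal drafting only
  permittedRoles: string[];                                // who may use AI-assisted workflows
  blockedDataCategories: (keyof ConfidentialityFlags)[];   // data AI workflows may not touch
  jurisdictionNotes: Record<string, string>;               // court- or bar-specific constraints
  requireAttorneyReviewBeforeDelivery: boolean;
}

// Example posture: internal drafting only, nothing client- or court-facing.
const cautiousPolicy: FirmAiPolicy = {
  aiAssistedWorkflowsEnabled: true,
  permittedUseTypes: ["internal_draft"],
  permittedRoles: ["attorney"],
  blockedDataCategories: ["touchesSealedMaterial", "touchesPrivilegedAnalysis"],
  jurisdictionNotes: { exampleDistrict: "check local standing orders before court-facing use" },
  requireAttorneyReviewBeforeDelivery: true,
};
```

A firm that is not ready to adopt AI at all would simply set aiAssistedWorkflowsEnabled to false and keep the rest of the matter model unchanged.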
Sources checked
Sources support the professional-responsibility and court-guidance examples named in the post.
Next step
The better design question is not how much AI a product can advertise. It is how clearly the product preserves attorney review, confidentiality, and accountability.