AI SDLC Framework

This framework defines how Stratpoint engineers are expected to incorporate AI tools into the software development lifecycle — responsibly, consistently, and with clear ownership.

Overview

AI tools accelerate every phase of software development. Our goal is to capture those gains while maintaining code quality, security, and accountability. This framework is a living document — as tools evolve, so will our guidance.

Core principle: The engineer owns every output. AI is a collaborator, not a replacement for engineering judgment.

AI at Each SDLC Phase

Planning

AI can accelerate requirements analysis, story writing, and effort estimation.

| Activity | Approved Use |
| --- | --- |
| Story refinement | Use AI to identify ambiguities and suggest acceptance criteria |
| Effort estimation | Use AI to generate analogous estimates; validate with domain knowledge |
| Risk identification | Use AI to surface common failure modes for the proposed approach |

Ground rule: Never share client-confidential requirements verbatim with public AI tools. Anonymize or paraphrase.
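
One lightweight way to follow this rule is a shared redaction step that runs before anything is pasted into a public AI tool. The sketch below is illustrative only: the `CONFIDENTIAL_TERMS` map and `anonymize` helper are hypothetical names, and a real engagement would maintain its own term list per client.

```python
import re

# Hypothetical helper: replace client-confidential terms with neutral
# placeholders before sharing requirements with a public AI tool.
# The term list is illustrative; maintain one per engagement.
CONFIDENTIAL_TERMS = {
    "Acme Logistics": "CLIENT",
    "Project Falcon": "PROJECT",
}

def anonymize(text: str) -> str:
    """Return text with confidential terms replaced by placeholders."""
    for term, placeholder in CONFIDENTIAL_TERMS.items():
        # Case-insensitive match so "acme logistics" is caught too.
        text = re.sub(re.escape(term), placeholder, text, flags=re.IGNORECASE)
    return text

prompt = "Acme Logistics needs Project Falcon to support 10k orders/day."
print(anonymize(prompt))
# -> CLIENT needs PROJECT to support 10k orders/day.
```

Simple term substitution will not catch everything (names embedded in identifiers, paraphrased details), so treat it as a backstop to engineer judgment, not a replacement for it.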

Design

AI can suggest architecture patterns, review proposed designs, and generate diagrams.

| Activity | Approved Use |
| --- | --- |
| Architecture review | Use AI to challenge assumptions and identify failure modes |
| ADR drafting | Use AI to draft the structure; engineer fills in context and decision |
| API design | Use AI to suggest consistent naming and RESTful patterns |

Ground rule: AI-generated architecture must be reviewed by at least one senior engineer before adoption.

Development

AI is most impactful here — autocompletion, code generation, debugging assistance.

| Activity | Approved Use |
| --- | --- |
| Boilerplate generation | Use AI to scaffold components, tests, and config files |
| Debugging | Use AI to explain errors and suggest fixes |
| Refactoring | Use AI to suggest cleaner patterns; review diffs carefully |
| Code explanation | Use AI to understand unfamiliar codebases |

Ground rule: Every line of AI-generated code must be understood and reviewed before commit. Do not ship code you cannot explain.

Testing

AI can generate test cases, identify edge cases, and help write test data.

| Activity | Approved Use |
| --- | --- |
| Unit test generation | Use AI to generate initial test stubs; review for correctness |
| Edge case discovery | Use AI to suggest boundary conditions and failure scenarios |
| Test data generation | Use AI to create anonymized, realistic fixture data |

Ground rule: Test coverage generated by AI must be validated against real requirements — not just against itself.
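
For the test-data row above, realistic fixtures can often be synthesized with only the standard library rather than copying real customer records. This is a minimal sketch; the customer fields and `make_customer` helper are hypothetical, not an actual Stratpoint schema.

```python
import random
import uuid
from datetime import date, timedelta

# Illustrative sketch: fully synthetic fixture data, so no real customer
# information ever enters the test suite or an AI prompt.
random.seed(42)  # deterministic fixtures keep test failures reproducible

FIRST_NAMES = ["Alex", "Sam", "Jordan", "Casey"]

def make_customer() -> dict:
    """Return one synthetic customer record (hypothetical schema)."""
    return {
        "id": str(uuid.uuid4()),
        "name": random.choice(FIRST_NAMES),
        "signup_date": (date(2024, 1, 1)
                        + timedelta(days=random.randint(0, 364))).isoformat(),
        "orders": random.randint(0, 50),
    }

fixtures = [make_customer() for _ in range(3)]
for customer in fixtures:
    print(customer)
```

Seeding the generator is the key design choice here: fixtures stay realistic and varied, but a failing test reproduces with the same data every run.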

Deployment

AI can assist with infra-as-code, CI/CD pipeline authoring, and runbook generation.

| Activity | Approved Use |
| --- | --- |
| IaC generation | Use AI to scaffold Terraform / CDK modules; security-review all outputs |
| CI/CD pipelines | Use AI to suggest pipeline steps and optimizations |
| Runbook generation | Use AI to draft operational runbooks from architecture context |

Ground rule: Never pass production credentials, connection strings, or secrets into AI context.
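
A simple pre-flight check can catch the most obvious violations before a prompt leaves your machine. The patterns and the `contains_secret` helper below are hypothetical and far from exhaustive; a real pipeline would use a dedicated secret scanner, with this as a last-line safety net.

```python
import re

# Illustrative pre-flight check: flag prompts that appear to contain
# credentials before they are sent to an AI tool. Patterns are examples,
# not a complete ruleset.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                           # AWS access key ID
    re.compile(r"(?i)(password|passwd|secret)\s*[:=]\s*\S+"),  # key=value secrets
    re.compile(r"(?i)postgres(ql)?://\S+:\S+@"),               # creds in connection string
]

def contains_secret(prompt: str) -> bool:
    """Return True if the prompt matches any credential-like pattern."""
    return any(p.search(prompt) for p in SECRET_PATTERNS)

prompt = "Why does this fail? conn = 'postgresql://app:hunter2@db.prod:5432/app'"
if contains_secret(prompt):
    print("Refusing to send prompt: possible secret detected. Redact first.")
```

Regex scanning produces false negatives by design, so the ground rule still stands: the engineer, not the filter, is responsible for keeping secrets out of AI context.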

Review

AI can assist in code review, documentation generation, and postmortem analysis.

| Activity | Approved Use |
| --- | --- |
| Code review assistance | Use AI to flag common issues before human review |
| Documentation | Use AI to generate docstrings and API docs from code |
| Postmortem drafting | Use AI to structure timeline and contributing factors |

Ground rule: AI review is a supplement, not a substitute, for human peer review.

Ground Rules for AI-Assisted Development

  1. You own the output. Regardless of how it was generated, every line in a commit is your responsibility.
  2. No secrets in context. Never paste API keys, passwords, or production data into AI tools.
  3. No client data. Anonymize all customer or client information before using it as AI context.
  4. Review everything. Read AI-generated code as carefully as any other code under review.
  5. Disclose AI use when relevant. If a significant portion of a PR was AI-generated, note it in the description.
  6. Use approved tools only. See the approved tools list below.

Approved Tools Per Phase

| Phase | Approved Tools |
| --- | --- |
| Planning | ChatGPT (anonymized input), Claude, Jira AI features |
| Design | ChatGPT, Claude, Excalidraw AI, Mermaid via AI |
| Development | GitHub Copilot, Cursor, Claude Code, Codeium |
| Testing | GitHub Copilot, Claude, ChatGPT |
| Deployment | GitHub Copilot, Claude |
| Review | GitHub Copilot PR review, Claude |

This list is maintained by the Engineering Leadership team. Propose additions via the #engineering-standards Slack channel.
