# AI SDLC Framework
This framework defines how Stratpoint engineers are expected to incorporate AI tools into the software development lifecycle — responsibly, consistently, and with clear ownership.
## Overview
AI tools accelerate every phase of software development. Our goal is to capture those gains while maintaining code quality, security, and accountability. This framework is a living document — as tools evolve, so will our guidance.
Core principle: The engineer owns every output. AI is a collaborator, not a replacement for engineering judgment.
## AI at Each SDLC Phase
### Planning
AI can accelerate requirements analysis, story writing, and effort estimation.
| Activity | Approved Use |
|---|---|
| Story refinement | Use AI to identify ambiguities and suggest acceptance criteria |
| Effort estimation | Use AI to generate analogous estimates; validate with domain knowledge |
| Risk identification | Use AI to surface common failure modes for the proposed approach |
Ground rule: Never share client-confidential requirements verbatim with public AI tools. Anonymize or paraphrase.
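A minimal sketch of what "anonymize or paraphrase" can look like in practice. The client and project names below are hypothetical placeholders; a real mapping would be maintained per engagement:

```python
import re

# Hypothetical client-identifying terms; maintain a real list per engagement.
REPLACEMENTS = {
    "Acme Corp": "the client",
    "Project Falcon": "the project",
}

def anonymize(text: str) -> str:
    """Replace client-identifying terms before sharing text with a public AI tool."""
    for term, placeholder in REPLACEMENTS.items():
        # Case-insensitive, whole-phrase replacement.
        text = re.sub(re.escape(term), placeholder, text, flags=re.IGNORECASE)
    return text

prompt = anonymize("Acme Corp needs Project Falcon to support 10k concurrent users.")
```

Automated substitution is a first pass only: read the result before pasting it anywhere, since identifying details can survive simple find-and-replace.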
### Design
AI can suggest architecture patterns, review proposed designs, and generate diagrams.
| Activity | Approved Use |
|---|---|
| Architecture review | Use AI to challenge assumptions and identify failure modes |
| ADR drafting | Use AI to draft the structure; engineer fills in context and decision |
| API design | Use AI to suggest consistent naming and RESTful patterns |
Ground rule: AI-generated architecture must be reviewed by at least one senior engineer before adoption.
### Development
AI has its greatest impact in this phase: autocompletion, code generation, and debugging assistance.
| Activity | Approved Use |
|---|---|
| Boilerplate generation | Use AI to scaffold components, tests, and config files |
| Debugging | Use AI to explain errors and suggest fixes |
| Refactoring | Use AI to suggest cleaner patterns; review diffs carefully |
| Code explanation | Use AI to understand unfamiliar codebases |
Ground rule: Every line of AI-generated code must be understood and reviewed before commit. Do not ship code you cannot explain.
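To illustrate why this ground rule exists, here is a hypothetical example (function names and behavior are invented for illustration) of an AI-suggested "cleanup" that quietly changes semantics in a way only a careful reviewer would catch:

```python
def highest_score_original(scores):
    # Original: raises ValueError on an empty list, which callers rely on
    # to detect missing data.
    return max(scores)

def highest_score_ai_suggested(scores):
    # AI-suggested refactor: silently returns 0 for an empty list,
    # swallowing the error path callers expect. Looks cleaner; behaves differently.
    return max(scores, default=0)
```

A diff like this is easy to approve at a glance, which is exactly why every AI-generated line must be understood, not just skimmed.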
### Testing
AI can generate test cases, identify edge cases, and help write test data.
| Activity | Approved Use |
|---|---|
| Unit test generation | Use AI to generate initial test stubs; review for correctness |
| Edge case discovery | Use AI to suggest boundary conditions and failure scenarios |
| Test data generation | Use AI to create anonymized, realistic fixture data |
Ground rule: Test coverage generated by AI must be validated against real requirements — not just against itself.
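A small sketch of the difference, using a hypothetical function and requirement (both invented for illustration). An AI-generated stub often asserts whatever the implementation currently does; the reviewed test pins the requirement instead:

```python
def apply_discount(price, percent):
    # Hypothetical function under test.
    # Requirement (from the spec, not the code): discounts are capped at 50%.
    return price * (1 - min(percent, 50) / 100)

def test_discount_is_capped_per_requirement():
    # Validates the stated requirement: 80% requested, 50% applied.
    assert apply_discount(100.0, 80) == 50.0

def test_normal_discount():
    assert apply_discount(100.0, 20) == 80.0
```

If the cap were accidentally removed, a test generated purely from the implementation would be regenerated to match the bug; a test written against the requirement would fail, which is the point.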
### Deployment
AI can assist with infrastructure-as-code (IaC), CI/CD pipeline authoring, and runbook generation.
| Activity | Approved Use |
|---|---|
| IaC generation | Use AI to scaffold Terraform / CDK modules; security-review all outputs |
| CI/CD pipelines | Use AI to suggest pipeline steps and optimizations |
| Runbook generation | Use AI to draft operational runbooks from architecture context |
Ground rule: Never pass production credentials, connection strings, or secrets into AI context.
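The safest habit is to never have secrets in the text you paste at all, but a redaction pass can act as a backstop. This is a minimal sketch; the patterns are illustrative and would need extending for your stack's secret formats:

```python
import re

# Illustrative patterns only; extend to cover your stack's secret formats.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),              # AWS access key IDs
    re.compile(r"(?i)password=[^;\s]+"),          # password fields in connection strings
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # generic API key assignments
]

def redact(text: str) -> str:
    """Redact likely secrets before text is used as AI context."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

safe = redact("Server=db;User=app;Password=hunter2; AKIAABCDEFGHIJKLMNOP")
```

Pattern matching catches known formats only; treat it as defense in depth, not permission to paste configuration files into AI tools.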
### Review
AI can assist in code review, documentation generation, and postmortem analysis.
| Activity | Approved Use |
|---|---|
| Code review assistance | Use AI to flag common issues before human review |
| Documentation | Use AI to generate docstrings and API docs from code |
| Postmortem drafting | Use AI to structure timeline and contributing factors |
Ground rule: AI review is a supplement, not a substitute, for human peer review.
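One way to apply "flag common issues before human review" is a cheap mechanical pre-pass so human reviewers spend attention on design and correctness. This is a toy sketch with invented checks, not a prescribed tool:

```python
# Toy pre-review pass: the markers and messages are illustrative only.
COMMON_ISSUES = {
    "TODO": "unresolved TODO left in the change",
    "print(": "possible leftover debug output",
}

def pre_review(diff_text: str) -> list[str]:
    """Return human-readable findings for each flagged line of a diff."""
    findings = []
    for lineno, line in enumerate(diff_text.splitlines(), start=1):
        for marker, message in COMMON_ISSUES.items():
            if marker in line:
                findings.append(f"line {lineno}: {message}")
    return findings

findings = pre_review("x = compute()\nprint(x)  # TODO remove")
```

In practice this niche is usually filled by linters and AI review bots; the point of the sketch is the workflow position: mechanical findings first, human judgment second.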
## Ground Rules for AI-Assisted Development
- You own the output. Regardless of how it was generated, every line in a commit is your responsibility.
- No secrets in context. Never paste API keys, passwords, or production data into AI tools.
- No client data. Anonymize all customer or client information before using it as AI context.
- Review everything. Read AI-generated code as carefully as any other code under review.
- Disclose AI use when relevant. If a significant portion of a PR was AI-generated, note it in the description.
- Use approved tools only. See the approved tools list below.
## Approved Tools Per Phase
| Phase | Approved Tools |
|---|---|
| Planning | ChatGPT (anonymized input), Claude, Jira AI features |
| Design | ChatGPT, Claude, Excalidraw AI, Mermaid via AI |
| Development | GitHub Copilot, Cursor, Claude Code, Codeium |
| Testing | GitHub Copilot, Claude, ChatGPT |
| Deployment | GitHub Copilot, Claude |
| Review | GitHub Copilot PR review, Claude |
This list is maintained by the Engineering Leadership team. Propose additions via the #engineering-standards Slack channel.