What should happen when a developer submits a pull request?
We follow a process shaped by real-world lessons—each checkpoint exists because of issues we've encountered and solved. The goal is simple: ensure that new code works reliably in production. No one wants to discover a critical bug on a Friday afternoon right before logging off.
It's all about balance. Minor changes shouldn't trigger endless reviews, but we need enough checks to avoid late-night emergencies. As projects scale, so does the impact of every change. Some sites are mission-critical, and stability becomes non-negotiable.
Here's what a typical pull request should look like on an enterprise project. We tailor versions of this process for most of our clients.
Tickets and acceptance criteria
Every pull request should start with a well-defined ticket. Developers need to understand what they're building and why it matters. For example, our tickets typically include:
- User story – Describes what a specific user (e.g., author or site visitor) wants to do and why. Format: A [user type] wants to [do something] in order to [achieve a goal].
- Background/motivation – Explains the purpose of the work and the problem it solves. We often link to related conversations from tools like ServiceNow, Zendesk, or TeamDynamix.
- Acceptance criteria – Clear, testable requirements that define when the work is "done" to ensure alignment between developers, reviewers, and QA.
- Visual documentation – Screenshots, mockups, or screen recordings that show the issue and the expected outcome.
Writing great tickets takes practice, but it pays off in the end. Even after issuing over 6,000 tickets on a multi-year government project, we continue to refine the process. Clear tickets save time, reduce confusion, and improve outcomes across the team.
Developer review and QA process
After development is complete, the developer submits a pull request, which kicks off a multi-step review process. A typical Lullabot pull request includes:
- Link to the original ticket – Ensures a clear connection between the reported problem and the proposed solution.
- Testing instructions – Step-by-step directions for QA to confirm that the fix or feature works as intended. These often expand on the original ticket's QA notes.
- Follow-up tasks – Note any additional actions, such as notifying a specific stakeholder, identifying the relevant production domain, or flagging someone who needs to review the change (e.g., an agency author affected by the update). The developer may also indicate if a related service ticket needs to be marked as "in progress."
- Code review notes – Highlights anything reviewers should be aware of, such as config changes, database updates, or specific areas of concern.
This structure keeps the review process clear, consistent, and easy to follow for both humans and the ticketing tools that track the work.
Code review
For large projects, each pull request typically requires two peer reviews before moving forward. This helps catch issues early and ensures the code meets quality and consistency standards.
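On GitHub-hosted projects, a requirement like this is usually enforced at the repository level rather than by convention. As a rough sketch (assuming GitHub's branch protection API and the Python requests library, with placeholder names and a token supplied via the environment), a two-approval rule might look like this, though most teams would simply set it once in the repository settings UI:

```python
import os
import requests

# Placeholder repository details; adjust for your own project.
OWNER, REPO, BRANCH = "example-org", "example-site", "main"
TOKEN = os.environ["GITHUB_TOKEN"]  # a token with admin rights on the repo

# GitHub's branch protection endpoint expects all four top-level keys,
# even when some of them are null.
payload = {
    "required_status_checks": None,
    "enforce_admins": True,
    "required_pull_request_reviews": {
        "required_approving_review_count": 2,  # two peer reviews before merging
        "dismiss_stale_reviews": True,         # new commits require fresh approval
    },
    "restrictions": None,
}

response = requests.put(
    f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json=payload,
    timeout=30,
)
response.raise_for_status()
```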
During review, developers assess:
- Code quality and adherence to standards
- Security vulnerabilities
- Performance impact
- Maintainability
- Project-specific architecture decision records (ADRs), such as Lullabot's ADRs
- Behavior with real content
Testing with real content
Most stakeholders and QA team members don't live on GitHub and may not have technical backgrounds. That's why testing in a realistic environment is critical.
We use Tugboat to automatically spin up a preview site for every pull request. Tugboat builds a complete copy of the production site, with live content and the proposed changes applied. This allows stakeholders to see and test new features without needing technical tools or risking the live site.
QA and stakeholders can validate changes visually and functionally by comparing the preview side by side with the production version. It also helps uncover edge-case issues that might not appear in local testing.
Automated checks
Enterprise projects require robust automation. Every pull request triggers a suite of automated tests that have saved us countless times during deployments. These typically include:
- Accessibility checks with Lighthouse, which must maintain a minimum score
- Functional tests with Playwright to catch regressions
- Security scanning using ZAP, with customizable thresholds (e.g., flag "high" risk vulnerabilities only or "medium" and above)
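To make the functional-testing layer concrete, here is a minimal sketch of the kind of Playwright regression test described above, written against Playwright's Python bindings with the pytest-playwright fixture. The URL, page titles, and selectors are placeholders, not taken from a real project:

```python
import re
from playwright.sync_api import Page, expect

# Placeholder preview URL; in CI this would point at the pull request's
# Tugboat preview environment rather than the production site.
BASE_URL = "https://pr-123.example-preview.site"

def test_homepage_shows_main_navigation(page: Page):
    """Guard against regressions in the site's primary navigation."""
    page.goto(BASE_URL)
    expect(page).to_have_title(re.compile("Example Site"))
    expect(page.get_by_role("navigation", name="Main navigation")).to_be_visible()

def test_search_returns_results(page: Page):
    """A basic end-to-end check of the site search."""
    page.goto(f"{BASE_URL}/search")
    page.get_by_role("searchbox").fill("annual report")
    page.get_by_role("button", name="Search").click()
    expect(page.get_by_role("article").first).to_be_visible()
```

Tests like these run headlessly on every pull request, so a failed assertion blocks the merge before anyone has to notice the regression by hand.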
We also run automated content quality checks, such as:
- Absolute URL detection – Prevents hardcoded links that break in different environments
- Orphaned content checks – Identifies content that's unreachable or unlinked
- Alt text auditing – Ensures images meet accessibility standards
- Document reviews – Zips and surfaces all uploaded files for external authors to audit or archive
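These content checks don't require heavy tooling. As a simplified illustration (not our production scripts), the absolute-URL and alt-text checks can be little more than an HTML parser that walks each rendered page; the hostname below is a placeholder:

```python
from html.parser import HTMLParser

# Placeholder production hostname; links hardcoded to it will break on
# preview and staging environments.
PRODUCTION_HOST = "www.example.gov"

class ContentAudit(HTMLParser):
    """Collects hardcoded absolute links and images with no alt attribute."""

    def __init__(self):
        super().__init__()
        self.absolute_links: list[str] = []
        self.images_missing_alt: list[str] = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and PRODUCTION_HOST in (attrs.get("href") or ""):
            self.absolute_links.append(attrs["href"])
        if tag == "img" and "alt" not in attrs:
            # Missing entirely; an empty alt="" is still valid for decorative images.
            self.images_missing_alt.append(attrs.get("src") or "(no src)")

def audit_page(html: str) -> list[str]:
    """Return human-readable problems found in one rendered page."""
    parser = ContentAudit()
    parser.feed(html)
    problems = [f"Hardcoded production link: {url}" for url in parser.absolute_links]
    problems += [f"Image missing alt text: {src}" for src in parser.images_missing_alt]
    return problems
```

In a pipeline, a script along these lines would crawl the preview site and fail the build whenever a page comes back with a non-empty problem list.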
The merge and release process
Once peer reviews are complete and automated checks pass, the pull request is ready to merge, but that doesn't mean we push it straight to production.
Enterprise projects benefit from scheduled releases, which group changes into planned deployments. This allows time for integration testing and reduces risk. We also avoid high-risk periods, such as Friday afternoons or major public events.
Release schedules vary. One client merges on Tuesdays and deploys on Thursdays. Others push 2–3 times a week during active development. If the cadence is slower, that's okay! What matters is consistency.
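Whatever the cadence, it helps to encode the schedule instead of relying on memory. As a small, slightly tongue-in-cheek sketch (assuming a Python-based deploy script and an invented release window, not any particular client's setup), a guard at the top of the pipeline can refuse to start a release outside the agreed hours:

```python
import datetime
import sys

# Invented release window: Tuesday through Thursday, starting before 3 p.m.
ALLOWED_WEEKDAYS = {1, 2, 3}  # Monday is 0, so these are Tue, Wed, Thu
LATEST_START_HOUR = 15

def deploy_window_is_open(now: datetime.datetime | None = None) -> bool:
    """Return True only inside the agreed deployment window."""
    now = now or datetime.datetime.now()
    return now.weekday() in ALLOWED_WEEKDAYS and now.hour < LATEST_START_HOUR

if __name__ == "__main__":
    if not deploy_window_is_open():
        sys.exit("Outside the release window. The deploy can wait until Tuesday.")
    print("Release window open; continuing with the deployment steps.")
```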
Process over tools
We use tools that cater to our clients—those that prioritize accessibility, reliability, and predictability. But the tools are secondary. What matters most is the discipline of process.
Here are some guiding principles for building workflows:
- Clear requirements and acceptance criteria
- Multiple layers of review (automated and human)
- Realistic testing environments with real content
- Predictable deployment schedules
The larger and more critical the project, the more this process pays off—with fewer surprises, less stress, and more restful weekends.
Pro tip from six years of hard-earned experience: never deploy on a Friday.