Penetration Testing
Rules of Engagement
A mature penetration test starts with clarity: what we're testing, how we're testing it, and how we keep your environment safe.
1) Scope Definition
- In-scope assets: domains, IP ranges, applications, APIs, cloud accounts, identity providers.
- Environments: production vs staging vs dedicated test environments.
- User roles: number of authenticated roles, permission tiers, and workflows.
- Explicit exclusions: third-party systems, shared services, and any no-test assets.
2) Safety Constraints
- DoS restrictions: Denial-of-service actions are excluded unless explicitly contracted and approved.
- Rate limits: traffic shaping and request rate caps to avoid service disruption.
- Data minimization: we collect sensitive data only when it is necessary for proof, and captured evidence is kept to the minimum needed.
- Change windows: agreed testing windows aligned to your operational risk tolerance.
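The rate-limiting constraint above is typically enforced tester-side. As a minimal sketch (not a prescribed tool; the class name and parameters are illustrative), a token-bucket throttle keeps scan traffic under an agreed requests-per-second cap while allowing short bursts:

```python
import time

class TokenBucket:
    """Client-side throttle: at most `rate` requests per second on average,
    with short bursts up to `capacity`, so scans stay under an agreed cap."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def acquire(self) -> None:
        """Block until a token is available, then consume it."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)

# Example: cap scanner traffic at 5 requests/second with bursts of up to 10.
bucket = TokenBucket(rate=5, capacity=10)
for _ in range(3):
    bucket.acquire()  # each probe waits for the throttle before sending
```

In practice the agreed cap and burst size would come from the rules-of-engagement document, and the same throttle wraps every outbound probe.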
3) Communications & Escalation
- Primary contacts: business and technical points of contact.
- Escalation path: who to call if we suspect active compromise or high-risk exposure.
- Cadence: optional daily updates, an end-of-test readout, and an optional remediation workshop.
- Incident-safe posture: immediate stop-and-notify protocol when needed.
4) Credential Handling
- Secure transfer: agreed method for providing test credentials (vault, secure channel).
- Rotation: optional forced rotation after engagement.
- Least privilege: accounts should match realistic roles; admin access only if required and approved.
- Deletion: credentials removed from tester systems at engagement completion (per agreement).
5) Evidence & Retention
- Evidence standard: screenshots/log excerpts sufficient for reproduction and proof.
- Retention timeline: agreed retention period and secure deletion policy.
- Confidentiality: NDA-ready handling; data shared only with authorized stakeholders.
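One simple way to support the evidence standard and retention policy above (a sketch under assumed conventions, not a prescribed tool) is to record a SHA-256 digest of each artifact at capture time, so both parties can verify that evidence is unchanged at readout and confirm what was destroyed at secure deletion:

```python
import hashlib
import json
from pathlib import Path

def evidence_record(path: Path) -> dict:
    """Return a manifest entry for one artifact: name, size, SHA-256 digest."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return {"file": path.name, "bytes": path.stat().st_size, "sha256": digest}

def build_manifest(evidence_dir: Path) -> str:
    """Build a JSON manifest covering every artifact in the evidence folder."""
    records = [evidence_record(p)
               for p in sorted(evidence_dir.iterdir()) if p.is_file()]
    return json.dumps(records, indent=2)
```

The manifest travels with the report; at the end of the retention period, the same digests serve as the record of exactly which files were securely deleted.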
6) Retest Policy (Included)
- Included retest: one retest within the agreed window after fixes are deployed.
- Validation output: updated status and validation notes for remediated items.
- Scope control: retest validates agreed fixes; new scope requires separate approval.
7) AI Augmentation Disclosure
- Human verification: every finding is validated by a human tester.
- Disclosure: AI assistance is disclosed where it contributes to artifacts.
- Data handling: no sharing of sensitive customer data with public models without written approval.
- Opt-out: AI usage constraints can be agreed for regulated environments.