Every December, people open tiny cardboard doors for chocolate, toys, and small surprises.
This year, we’re opening something more powerful:
25 days of prompts to level up how you think, build, and lead with AI.
The Advent of Prompts is a structured, 25-day series designed to sharpen how you frame problems, design prompts, and deliver outcomes with modern AI systems.
Each day gives you:
- A focused challenge
- A carefully designed prompt
- A core skill it builds
- A stretch goal if you want to go further
Some days sharpen logic. Others amplify creativity. A few will completely rewire how you think about prompting. All of them are fast, practical, and built for people doing real work.
Think of it as a holiday workout program for your AI brain:
- No running shoes required
- Curiosity recommended
- Hot chocolate optional
If you’re a product leader, engineer, architect, strategist, designer, analyst, or anyone trying to make AI actually useful in your work, this series is for you.
Use it however you like:
- Open one challenge per day, like a traditional advent calendar
- Or binge all 25 prompts in a single weekend and come back to your favorites
Either way:
By Day 25, you won’t just “use AI” — you’ll run it with intent.
CHALLENGES
DAY 1 — Prompt Decomposition
Challenge: Break a complex, ambiguous request into a structured, multi-layer prompt that guarantees predictable behavior.
Prompt:
“Rewrite the following request into a multi-layer instruction set that includes system rules, steps for planning, steps for execution, and a validation checklist. Include a rationale for each structural choice.”
Skill: Prompt architecture
Stretch Goal: Produce an improved version that separates deterministic and non-deterministic tasks.
❄️ SOLUTION
Solution Prompt
System Rules
You are a Staff Prompt Architect supporting a CTO / CPO preparing executive materials.
Follow these rules:
- Optimize for clarity, concision, and decision-readiness — the output should be safe to drop into an exec deck with minimal editing.
- Separate facts from judgment: keep deterministic, data-based work isolated from interpretation and recommendations.
- Make your structure reusable as a prompt template, not a one-off response.
- Label all assumptions explicitly; treat them as risk surfaces, not hidden shortcuts.
- Include a short rationale for your structural choices so the prompt can be improved over time.
Input Request
“Create a summary of the Q4 Product Review Meeting notes and prepare recommendations for leadership.”
Context Data
From the Q4 Product Review:
- Engineering velocity decreased by 8% versus prior quarter.
- Client onboarding time improved by 12%.
- There were two production outages impacting EMEA clients.
- The AI assistant prototype exceeded initial adoption targets.
- Leadership is concerned about scaling delays going into next fiscal year.
Planning Steps
- Parse the request into distinct task types: summary, analysis, recommendations.
- Identify the minimum structure needed for an exec-ready artifact (headings, sections, level of detail).
- Classify each task as deterministic (fact-based, context-bound) or non-deterministic (judgment, strategy, interpretation).
- Identify missing information and explicitly document assumptions required to proceed.
Execution Steps
Deterministic Execution
(Use only facts from context)
- Produce a structured summary using the following fixed sections:
- Performance Highlights
- Key Issues
- Risks & Exposure
- Metrics Snapshot
- Populate each section only with information stated in the context.
- Create an Assumptions & Gaps list that captures anything inferred or missing (clearly labeled).
Non-Deterministic Execution
- Generate 3–5 strategic recommendations for executive leadership, explicitly tied to the facts in the summary.
- Prioritize recommendations using Impact / Urgency tags.
- List Likely Leadership Questions that this summary will trigger (to pre-empt follow-ups).
Validation Checklist
Before finalizing, verify:
- All metrics and events are traceable directly to the provided context.
- Assumptions are explicitly labeled and separated from facts.
- Summary and recommendations are distinct sections and not blended.
- Output fits on one page (or a single LinkedIn post) without losing key meaning.
- Structural elements are generic enough to reuse for the next QBR or product review.
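If you want to reuse this layered structure outside a chat window, here is a minimal Python sketch of how the layers could be assembled into a single prompt. The `call_llm` helper is a hypothetical placeholder for whatever model client you use, and the layer text is abbreviated from the sections above.

```python
# Minimal sketch: assemble the multi-layer prompt from its parts.
# `call_llm` is a hypothetical placeholder for your model provider's client.

SYSTEM_RULES = """You are a Staff Prompt Architect supporting a CTO/CPO.
- Optimize for clarity, concision, and decision-readiness.
- Separate facts from judgment.
- Label all assumptions explicitly."""

PLANNING_STEPS = """1. Parse the request into summary, analysis, and recommendations.
2. Classify each task as deterministic or non-deterministic.
3. Document missing information as explicit assumptions."""

VALIDATION_CHECKLIST = """- Every metric traces back to the provided context.
- Assumptions are labeled and separated from facts.
- Summary and recommendations stay in distinct sections."""


def build_layered_prompt(request: str, context: str) -> str:
    """Combine the system, input, planning, execution, and validation layers."""
    return "\n\n".join([
        "SYSTEM RULES\n" + SYSTEM_RULES,
        "INPUT REQUEST\n" + request,
        "CONTEXT DATA\n" + context,
        "PLANNING STEPS\n" + PLANNING_STEPS,
        "EXECUTION STEPS\nDo the deterministic, fact-bound work first, then the judgment calls.",
        "VALIDATION CHECKLIST\n" + VALIDATION_CHECKLIST,
    ])


def call_llm(prompt: str) -> str:
    """Placeholder: swap in your actual model client here."""
    raise NotImplementedError


if __name__ == "__main__":
    prompt = build_layered_prompt(
        request="Summarize the Q4 Product Review and prepare recommendations.",
        context="- Engineering velocity decreased by 8% versus prior quarter. ...",
    )
    print(prompt)  # or: print(call_llm(prompt))
```

Keeping each layer as its own constant makes it easy to swap the validation checklist or system rules without rewriting the whole prompt.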
🎁 Example Output (Running the Prompt)
Q4 Product Review — Executive Summary
1. Performance Highlights
- Client onboarding time improved by 12% quarter-over-quarter, indicating meaningful gains in process efficiency and/or tooling impact.
- The AI assistant prototype exceeded initial adoption targets, signaling strong user pull for AI-augmented workflows and a viable path to scale this capability.
2. Key Issues
- Engineering velocity decreased by 8% vs. prior quarter, which may constrain roadmap delivery if the trend persists.
- There were two production outages impacting EMEA clients, underscoring reliability and resilience gaps in that region.
- Leadership remains concerned about scaling delays heading into the next fiscal year, particularly as demand and product surface area expand.
3. Risks & Exposure
- Client Trust Risk (EMEA): Repeated or prolonged outages could erode trust with EMEA clients, complicate renewals, and increase the need for commercial concessions.
- Execution Risk (Velocity): A sustained 8% velocity decline may impact delivery of strategic initiatives, especially those tied to revenue, retention, or regulatory commitments.
- Growth Risk (Scaling Delays): If scaling constraints are not resolved, the organization may be unable to onboard new clients or roll out new capabilities at the pace required by the business.
4. Metrics Snapshot
- –8% engineering velocity vs. prior quarter
- +12% improvement in client onboarding time
- 2 production outages impacting EMEA clients
- AI assistant prototype: exceeded initial adoption targets (no absolute metrics provided)
Assumptions & Gaps
(Explicitly labeled — not to be confused with facts)
- Assumption: The 8% velocity decline is not driven by a one-off event (e.g., a major migration, re-platforming, or release freeze), as no such context is provided.
- Assumption: The 12% improvement in onboarding time reflects durable process and tooling changes rather than temporary staffing increases or short-term workarounds.
- Gap: No information on outage duration, root cause, number of clients impacted, or SLA breach details.
- Gap: No quantitative data on AI assistant impact (e.g., hours saved, issue resolution time improvement, ticket deflection).
- Gap: No explicit description of the current scaling bottlenecks (architecture, infrastructure, process, or third-party dependency).
Strategic Recommendations for Leadership
Each recommendation is explicitly tied to the facts above and tagged for Impact / Urgency.
- Launch a Focused EMEA Reliability Program
- Impact: High / Urgency: High
- Establish a dedicated initiative to address EMEA reliability: root-cause analysis for both outages, remediation plan, and clear SLOs/SLA alignment.
- Require regular reporting on incident trends, time-to-detect, and time-to-recover to the executive team until stability improves.
- Elevate Scaling to a Named, Cross-Functional Program
- Impact: High / Urgency: High
- Treat scaling delays as a first-class program with a single accountable owner, clear milestones, and resourcing across product, engineering, and infrastructure.
- Tie program success criteria directly to FY targets (e.g., max time to onboard a new large client, concurrency thresholds, regional expansion goals).
- Diagnose and Reverse the Velocity Decline
- Impact: Medium–High / Urgency: Medium
- Perform a focused diagnostic on the 8% velocity drop: tech debt, environment instability, context switching, process friction, or staffing mix.
- Fund a short list of high-leverage interventions (automation, CI/CD improvements, test coverage, or scope management) and track velocity weekly for the next quarter.
- Scale the AI Assistant with Clear Business Outcomes
- Impact: Medium / Urgency: Medium
- Move from prototype success to structured pilots with defined cohorts and KPIs (e.g., tickets resolved per agent, time saved per workflow, NPS impact).
- Use pilot data to build a case for broader rollout as a platform capability embedded into core journeys (onboarding, support, operations).
- Create an Integrated Q1 Executive Dashboard
- Impact: Medium / Urgency: Medium
- Build an integrated dashboard tracking velocity, reliability (by region), onboarding time, and AI assistant adoption to provide a single view of execution health.
- Use this as the main instrumentation layer for monthly and quarterly reviews, reducing ad-hoc reporting cycles.
Likely Leadership Questions
To reduce back-and-forth and support faster decision-making, expect questions such as:
- Velocity: What are the top 2–3 root causes of the 8% velocity decline, and what specific actions will we take in Q1 to correct it?
- Reliability: Are the two EMEA outages isolated incidents or symptoms of a broader architectural or operational issue? What is the timeline to materially reduce incident risk?
- Scaling: What are the key constraints driving scaling delays (architecture, infra, vendor, process), and what trade-offs are we making if we do not address them this fiscal year?
- AI Assistant: Beyond adoption, what measurable business outcomes has the AI assistant delivered so far, and what is the investment required to scale it responsibly?
- Portfolio View: How do these items (velocity, reliability, scaling, AI) interact? Where should we sequence investment to avoid solving one problem at the expense of another?
Rationale for Structural Choices
- Four core sections (Highlights, Issues, Risks, Metrics) map cleanly to how executive teams scan updates: what’s working, what’s not, where we’re exposed, and what the numbers say.
- Assumptions & Gaps are isolated to make risk surfaces explicit and auditable; this prevents misinterpretation when content is forwarded or reused in decks.
- Recommendations with Impact/Urgency tags enable quick prioritization in leadership forums and make it easy to build an action-oriented slide.
- Likely Leadership Questions anticipate the next conversation, reducing meeting churn and enabling better-prepared discussions.
DAY 2 — Precision Role Assignment
Challenge: Assign roles to the model that influence reasoning, expertise level, and quality bar.
Prompt:
“Define a role for the model that includes expertise level, decision authority, risk tolerance, and communication style. Then instruct the model to ask clarifying questions based on its assigned role.”
Skill: Role shaping
Stretch Goal: Create a role hierarchy for multi-step workflows.
❄️ SOLUTION
---
> **NOTE: This is an example prompt. Replace the task with your own scenario as needed.**
### Example Prompt – Multi-Role AI for Enterprise Cloud Migration
You are a **multi-role AI team** working on this task:
> **Task (Example):**
> Help an enterprise IT organization plan and execute a cloud migration and modernization of a legacy line-of-business application currently running on-prem.
---
#### 1. Primary Role Definition
Adopt the primary role of a **Lead Cloud Transformation Partner** and briefly define:
* **Expertise Level:** Principal enterprise & cloud architect (10+ years), experienced in large-scale migrations, security, and IT operating models.
* **Decision Authority:**
* Can recommend migration strategies, target architectures, and sequencing.
* Must present options (with trade-offs) when choices materially change risk, cost, or compliance posture.
* **Risk Tolerance:** Balanced – conservative for production cutover, security, and compliance; pragmatic elsewhere.
* **Communication Style:** Executive-ready, structured, concise; uses headings and bullets; addresses CIO and senior IT leaders.
Output a short paragraph plus a bulleted summary of this role.
---
#### 2. Role Hierarchy for the Workflow
Define a role hierarchy you will switch between:
1. **Cloud Strategy Lead (Strategist)**
* Mission: Frame goals, constraints, and success metrics.
* Risk: Medium; business- and outcome-focused.
2. **Principal Cloud Architect (Specialist/Architect)**
* Mission: Design migration strategy and target architecture.
* Risk: Balanced; robust, secure, and operable solutions.
3. **Risk & Quality Officer (Risk/Compliance)**
* Mission: Stress-test plan for security, reliability, compliance, and operational readiness.
* Risk: Low; highlights blockers and mitigations.
4. **CIO Communications Partner (Communicator)**
* Mission: Package the plan into a CIO-ready roadmap and summary.
* Risk: Medium; simplifies without distorting risk or feasibility.
For each role, list: **Name & Mission, Expertise Level (1 line), Decision Authority (1–2 bullets), Risk Tolerance, Communication Style (1 line).**
---
#### 3. Workflow
State how you’ll use these roles step by step:
1. **[Cloud Strategy Lead]** Problem framing and alignment to business outcomes.
2. **[Cloud Strategy Lead]** Ask clarifying questions.
3. **[Principal Cloud Architect]** Propose migration strategy and target architecture.
4. **[Risk & Quality Officer]** Identify risks, assumptions, and mitigations.
5. **[CIO Communications Partner]** Produce a concise CIO-ready migration roadmap and summary.
---
#### 4. Clarifying Questions (Role-Based)
As **Cloud Strategy Lead**, ask **3–7 high-impact questions**, grouped by:
* **Goals & Success:** Primary objectives (cost, agility, resilience).
* **Constraints & Risk:** SLAs, regulatory/compliance regimes, change windows.
* **Context & Landscape:** Existing cloud footprint, identity, logging/monitoring, key integrations.
* **Timeline & Resources:** Target timeline, budget, and internal skills.
If answers are missing, state explicit **assumptions** and proceed.
---
#### 5. Execution
After questions are answered or assumptions are stated:
1. Work through the steps, labeling each section with the active role, e.g.:
* **[Cloud Strategy Lead] Problem Framing**
* **[Principal Cloud Architect] Migration Strategy & Target Architecture**
* **[Risk & Quality Officer] Risks & Mitigations**
* **[CIO Communications Partner] CIO Summary & Roadmap**
2. Ensure consistency with the defined **expertise, authority, risk tolerance, and communication style**.
3. End with a **Final Output** section summarizing:
* Recommended migration strategy and high-level architecture.
* Key rationale and trade-offs.
* Top risks, assumptions, and mitigations.
* 3–5 concrete next actions for the CIO/IT leadership team.
---
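Before looking at the example run: the role hierarchy defined above also translates naturally into code, so a multi-step workflow can reuse the same role definitions across steps. A minimal sketch follows; all role names and values are illustrative, not a fixed schema.

```python
# Sketch: the role hierarchy as plain data so multi-step workflows can reuse it.
# All role values are illustrative, not a fixed schema.
from dataclasses import dataclass


@dataclass
class Role:
    name: str
    mission: str
    expertise: str
    authority: str
    risk_tolerance: str
    style: str

    def as_prompt_header(self) -> str:
        """Render the role as a block to prepend to a step's prompt."""
        return (
            f"You are acting as: {self.name}\n"
            f"Mission: {self.mission}\n"
            f"Expertise: {self.expertise}\n"
            f"Decision authority: {self.authority}\n"
            f"Risk tolerance: {self.risk_tolerance}\n"
            f"Communication style: {self.style}"
        )


WORKFLOW = [
    Role("Cloud Strategy Lead", "Frame goals, constraints, and success metrics",
         "Senior IT strategy leader", "Defines approach; escalates trade-offs",
         "Medium", "Outcome-focused, concise"),
    Role("Principal Cloud Architect", "Design migration strategy and target architecture",
         "Principal-level cloud architect", "Specifies patterns; escalates lock-in trade-offs",
         "Balanced", "Structured, technical but accessible"),
    Role("Risk & Quality Officer", "Stress-test for security, reliability, and compliance",
         "Senior risk and SRE reviewer", "Flags blockers; cannot override strategy",
         "Low", "Direct, mitigation-focused"),
    Role("CIO Communications Partner", "Package the plan into a CIO-ready summary",
         "Senior IT leader fluent in C-suite communication", "Owns narrative and structure",
         "Medium", "Clear, concise, narrative-led"),
]

if __name__ == "__main__":
    for step, role in enumerate(WORKFLOW, start=1):
        print(f"--- Step {step} ---\n{role.as_prompt_header()}\n")
```

Rendering each role as a header keeps every step of the workflow anchored to the same expertise, authority, and risk-tolerance contract.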
🎁 Example Output (Running the Prompt)
**[Lead Cloud Transformation Partner] Primary Role Definition**
I am acting as a **Lead Cloud Transformation Partner**.
I operate as a principal-level enterprise and cloud architect guiding large organizations through complex cloud migrations with a strong emphasis on risk-managed execution and executive-ready communication.
* **Expertise Level**
* Principal enterprise & cloud architect (10+ years).
* Deep experience in large-scale migrations, security, reliability, and IT operating models.
* **Decision Authority**
* Can recommend migration strategies, target architectures, and sequencing.
* Will present options with trade-offs where choices significantly impact risk, cost, or compliance.
* **Risk Tolerance**
* Balanced: conservative for production cutovers, data protection, and compliance; pragmatic and value-focused elsewhere.
* **Communication Style**
* Executive-ready, structured, concise.
* Uses headings and bullets.
* Tailored to CIO, enterprise architects, and senior IT leadership.
---
### 1. Role Hierarchy for the Workflow
**Role 1 – Cloud Strategy Lead (Strategist)**
* **Mission:** Frame goals, constraints, and success metrics for the migration and modernization program.
* **Expertise Level:** Senior IT strategy / enterprise architecture leader.
* **Decision Authority:**
* Can define high-level migration approach and business alignment.
* Escalates major strategic trade-offs (e.g., rehost vs. refactor-only) as options.
* **Risk Tolerance:** Medium – outcome-driven but cautious on business and operational risk.
* **Communication Style:** Outcome-focused, business-friendly, concise.
**Role 2 – Principal Cloud Architect (Specialist/Architect)**
* **Mission:** Design the migration strategy and target cloud architecture.
* **Expertise Level:** Principal-level cloud architect across compute, storage, network, IAM, observability, and CI/CD.
* **Decision Authority:**
* Can specify technical patterns, services, and architecture choices consistent with strategy and constraints.
* Escalates major cost or platform-lock-in trade-offs as options.
* **Risk Tolerance:** Balanced – prioritizes robustness, security, operability, and cost.
* **Communication Style:** Structured, technical but accessible.
**Role 3 – Risk & Quality Officer (Risk/Compliance)**
* **Mission:** Stress-test the plan for security, reliability, compliance, and operational readiness.
* **Expertise Level:** Senior risk, compliance, and SRE reviewer.
* **Decision Authority:**
* Can flag blocking risks and required mitigations.
* Cannot override strategy, but can recommend changes and gates.
* **Risk Tolerance:** Low – strongly oriented toward safety, compliance, and resilience.
* **Communication Style:** Direct, issue- and mitigation-focused.
**Role 4 – CIO Communications Partner (Communicator)**
* **Mission:** Package the plan into a CIO-ready roadmap and summary.
* **Expertise Level:** Senior IT/product leader experienced in C-suite communication.
* **Decision Authority:**
* Can define narrative, structure, and emphasis of the final deliverable.
* Preserves factual accuracy and risk transparency.
* **Risk Tolerance:** Medium – willing to simplify complexity, but not misrepresent risk or feasibility.
* **Communication Style:** Clear, concise, narrative-led, with explicit decisions, benefits, risks, and asks.
---
### 2. Workflow
I will use the roles in this order:
1. **[Cloud Strategy Lead]** Problem framing and alignment to business outcomes.
2. **[Cloud Strategy Lead]** Define high-impact clarifying questions and assumed answers.
3. **[Principal Cloud Architect]** Propose migration strategy and target architecture.
4. **[Risk & Quality Officer]** Identify risks, assumptions, and mitigations.
5. **[CIO Communications Partner]** Produce a concise, CIO-ready roadmap and summary.
---
### 3. Clarifying Questions (with Assumed Answers)
**[Cloud Strategy Lead]**
Below are the key questions I *would* ask. Since we’re not iterating, I’ll state **assumptions** so we can proceed.
**A. Goals & Success**
1. What is the primary objective: cost reduction, agility/speed, resilience, innovation, or regulatory pressure?
* **Assumption:** Primary objectives are **cost optimization** and **improved agility**, with required or desired uplift in **resilience**.
2. What success metrics matter most (e.g., % cost reduction, improved deployment frequency, uptime target)?
* **Assumption:**
* 20–30% infra TCO reduction over 2–3 years.
* 99.9% uptime target.
* Ability to release changes weekly instead of quarterly.
**B. Constraints & Risk**
3. What regulatory/compliance regimes apply (e.g., SOC2, GDPR, HIPAA)?
* **Assumption:** SOC2 and GDPR apply; data residency must remain in specific regions.
4. What are acceptable downtime and change windows for go-live?
* **Assumption:**
* Max 2–4 hours planned downtime for final cutover.
* Weekend or off-hours window available.
**C. Context & Landscape**
5. Do you already have a cloud landing zone and chosen provider?
* **Assumption:** A basic landing zone exists on a single major cloud provider (e.g., AWS/Azure), with central IAM, logging, and networking already established.
6. How tightly integrated is the legacy app with other systems (e.g., batch jobs, shared DB, mainframe)?
* **Assumption:** Moderate integration: shared database with other apps and some nightly batch jobs; no mainframe.
**D. Timeline & Resources**
7. What is the target timeline and internal team capacity?
* **Assumption:**
* Target: 9–12 months to full migration.
* A dedicated core team exists (5–8 engineers + architect + PM) with access to a central platform team.
---
### 4. Execution
#### 4.1 [Cloud Strategy Lead] Problem Framing
**Business Context & Drivers**
* Legacy LOB application currently on-prem, critical to revenue and/or operations.
* Drivers:
* **Cost:** Reduce data center and hardware refresh costs; pay-as-you-go model.
* **Agility:** Faster release cycles and ability to evolve the app.
* **Resilience:** Improve HA/DR posture beyond current on-prem capabilities.
* **Compliance:** Maintain or improve adherence to SOC2/GDPR.
**High-Level Objectives**
* Migrate the application to cloud within **9–12 months**, minimizing business disruption.
* Achieve **99.9% availability**, with DR capability to recover within **RTO 4 hours / RPO 15–60 minutes**.
* Support **weekly release cadence** via modern CI/CD.
* Achieve a **20–30% TCO reduction** over 2–3 years versus a “stay on-prem” baseline.
**Migration Strategy Principles**
* Prefer **phased migration** over big-bang where feasible.
* Use **rehost + incremental replatform/refactor** to balance risk and modernization.
* Leverage existing landing zone and cloud standards (IAM, networking, logging).
* Bake in observability and automation as first-class requirements.
---
#### 4.2 [Principal Cloud Architect] Migration Strategy & Target Architecture
**Overall Migration Approach**
* **Phase 1 – Discover & Assess**
* Detailed inventory of app components, dependencies, data flows, SLAs, and infra.
* Performance baselining and capacity profiling.
* Risk and complexity assessment to validate the phased approach.
* **Phase 2 – Rehost (Lift & Shift) to Cloud**
* Move application workloads largely “as-is” to IaaS or lightly containerized form.
* Keep architecture changes minimal but align with cloud networking, IAM, and logging standards.
* Reduce data center risk and create a foundation for modernization.
* **Phase 3 – Replatform & Partial Refactor**
* Introduce managed services where they materially improve reliability or reduce toil (e.g., managed DB, load balancer).
* Extract selected services (e.g., reporting, batch processing) into separate components or microservices where ROI is clear.
* Implement CI/CD, blue/green or canary deployments, and infrastructure as code.
* **Phase 4 – Optimize**
* Cost optimization (rightsizing, autoscaling, reserved capacity).
* Performance tuning, resilience patterns (circuit breakers, retries, back-off).
* Further refactoring driven by business priorities.
**Target Architecture (Conceptual)**
* **Compute Layer**
* Containerized workloads (e.g., Kubernetes or managed container platform) for app services.
* Possibly some VM-based components during initial rehost, then converge to containers.
* **Data Layer**
* Managed relational database service for primary transactional data.
* Separate analytics/reporting store (e.g., read replica or data warehouse) to offload reporting.
* Encrypted at rest and in transit, with strict IAM-based access controls.
* **Networking & Security**
* Well-defined VPC/VNet topology with private subnets for app and data tiers.
* Centralized ingress via managed load balancer or API gateway.
* Integration with enterprise IdP for SSO and least-privilege access.
* Network segmentation and security groups aligned with zero-trust principles.
* **Observability & Operations**
* Centralized logging (app, infra, audit logs) with search and retention policies.
* Metrics and tracing via managed observability stack.
* Alerts for SLO breaches, error rates, latency, capacity, and cost anomalies.
* Runbooks for common incidents.
* **CI/CD & Infrastructure as Code**
* Git-based workflows with pipeline tooling for build, test, security scans, and deployment.
* Infrastructure as code (e.g., Terraform/Bicep/CloudFormation) for repeatable environments.
* Blue/green or canary deployment patterns for safe releases.
* **Integration**
* Use a message bus or integration services where possible to decouple from legacy systems.
* For shared database scenarios, introduce intermediate APIs or data replication as a stepping stone.
**Cutover Strategy**
* Dress rehearsal in a pre-production environment mirroring production.
* Data migration approach:
* Initial bulk load from on-prem DB to cloud DB.
* Change data capture or log shipping until cutover.
* Final cutover during agreed change window with rollback plan to on-prem if critical issues arise.
---
#### 4.3 [Risk & Quality Officer] Risks, Assumptions, and Mitigations
**Key Risks & Mitigations**
1. **Data Migration & Integrity**
* *Risk:* Inconsistent data during cutover, prolonged downtime.
* *Mitigation:*
* Use CDC or log-based replication for near-real-time sync.
* Validate data pre- and post-cutover; run shadow read checks.
* Run at least one full-scale dress rehearsal.
2. **Performance & Latency**
* *Risk:* Cloud deployment underperforms due to under-sizing or changed latency patterns.
* *Mitigation:*
* Conduct performance testing pre-go-live.
* Use autoscaling and proper capacity reserves.
* Place compute and data in regions close to users and dependencies.
3. **Integration Breakage**
* *Risk:* Upstream/downstream systems fail due to changed endpoints, security, or timing.
* *Mitigation:*
* Maintain a detailed integration map and run end-to-end test scenarios.
* Use API gateways and backward-compatible interfaces where possible.
* Communicate changes early with integrator teams.
4. **Security & Compliance Gaps**
* *Risk:* Misconfigurations expose data or violate SOC2/GDPR expectations.
* *Mitigation:*
* Enforce baseline security controls via landing zone and policy-as-code.
* Perform security reviews, threat modeling, and penetration tests.
* Ensure logging, monitoring, and retention meet compliance requirements.
5. **Operational Readiness**
* *Risk:* Ops team not prepared to run and support the new environment.
* *Mitigation:*
* Train Ops and SRE teams; create runbooks and on-call rotations.
* Establish SLIs/SLOs and incident management processes.
* Conduct game days and failure simulations.
**Core Assumptions**
* Existing landing zone meets basic enterprise standards.
* A single cloud provider is used.
* The organization is willing to invest in a dedicated core team and accept a 9–12 month program.
---
#### 4.4 [CIO Communications Partner] CIO Summary & Roadmap
**Executive Summary**
We propose a **phased cloud migration and modernization** of the legacy line-of-business application that balances risk, cost, and agility. The program will rehost the application to the cloud, then progressively replatform and refactor high-value areas, while strengthening security, resilience, and observability.
**High-Level Roadmap (9–12 Months)**
1. **Phase 1 – Discover & Plan (4–6 weeks)**
* Deep assessment of the app, dependencies, data, and SLAs.
* Define success metrics, risk appetite, and target architecture.
* Confirm migration strategy and resourcing.
2. **Phase 2 – Rehost to Cloud (8–12 weeks)**
* Lift & shift core workloads into the existing landing zone.
* Ensure integration with enterprise IAM, logging, and networking.
* Establish baseline observability.
3. **Phase 3 – Replatform & Modernize (12–20 weeks)**
* Migrate to managed DB and enhance resilience (HA/DR).
* Implement CI/CD, IaC, and safer deployment patterns.
* Refactor selected components where ROI is clear.
4. **Phase 4 – Optimize & Evolve (Ongoing)**
* Rightsize resources and optimize cost.
* Further refactor for agility and innovation as business needs evolve.
* Mature SRE practices and operational excellence.
**Benefits**
* Reduced infra and data center costs over time.
* Improved agility and release frequency.
* Stronger resilience and disaster recovery posture.
* A modern platform foundation for future features and services.
**Key Risks & Controls**
* Data migration, performance, integration, and security risks are addressed with rehearsals, observability, and robust governance.
* Operational readiness is ensured via training, runbooks, and SRE practices.
---
### Final Output – Recommendations & Next Actions
**Recommended Strategy**
* Execute a **phased migration**: rehost to cloud first, then replatform and selectively refactor.
* Adopt a **containerized, managed-services-heavy architecture** with strong IAM, observability, and CI/CD.
* Govern the program through clear success metrics, risk controls, and executive sponsorship.
**Key Rationale**
* Phased approach reduces risk while still enabling meaningful modernization.
* Leveraging managed services and IaC improves reliability, security, and speed of change.
* Strong observability and SRE practices protect uptime and support compliance obligations.
**Top Risks & Mitigations (Summary)**
* **Data integrity & downtime:** CDC-based migration, dress rehearsals, rollback plans.
* **Performance:** Capacity planning, autoscaling, performance testing.
* **Integration:** Integration mapping, end-to-end testing, backward-compatible interfaces.
* **Security/compliance:** Landing zone guardrails, security reviews, SOC2/GDPR-aligned controls.
* **Operational readiness:** Training, runbooks, SLOs, on-call processes.
**3–5 Concrete Next Actions for CIO/IT Leadership**
1. **Sponsor a 4–6 week Discovery & Planning phase** with named owner, budget, and cross-functional team (app, infra, security, compliance).
2. **Confirm cloud provider and landing zone standards** and address any gaps before migration.
3. **Establish success metrics and risk guardrails** (uptime, cost targets, compliance requirements, acceptable downtime).
4. **Form a dedicated migration squad** (architect, engineers, PM, security, ops) with clear accountability.
5. **Schedule an executive checkpoint** at the end of Discovery to approve the detailed migration plan and roadmap.
DAY 3 — Guardrails for Correctness
Challenge: Build a soft guardrail system that reduces hallucination risk.
Prompt:
“Before you generate the answer, list assumptions, missing data, hidden constraints, and unresolved ambiguities. Do not generate the final answer until the validation list is complete and internally consistent.”
Skill: Hallucination control
Stretch Goal: Add severity scoring for each assumption.
❄️ SOLUTION
## 1. Validation Phase (no final answer yet)
Before you generate any final answer, create a section called "Validation List" and systematically list:
- Assumptions
- Missing data
- Hidden constraints
- Unresolved ambiguities
- Tool / source limitations (e.g., no web access, incomplete context)
For EACH assumption in the Validation List:
1. Assign a severity score using this rubric:
- 1 – Low: Minor assumption; unlikely to affect overall correctness in a material way.
- 2 – Moderate: Could affect nuances or some parts of the answer, but core guidance likely remains valid.
- 3 – High: If wrong, would significantly change the answer, its safety, or its usefulness.
- 4 – Critical: Central to the answer; if wrong, the answer would likely be misleading, unsafe, or fundamentally incorrect.
2. Provide:
- A short description of the assumption.
- The severity score (1–4).
- A brief rationale for the severity.
- A mitigation plan (e.g., “ask user X”, “offer multiple scenarios”, “avoid specific numbers”, “explicitly mark as speculation”).
Format for each assumption:
- A#: <short name>
- Description: …
- Severity: <1–4> (<Low/Moderate/High/Critical>)
- Rationale: …
- Mitigation: …
Also explicitly list:
- Missing Data: items you would ideally know but do not.
- Hidden Constraints: any implicit constraints you are inferring.
- Unresolved Ambiguities: questions or interpretations that remain open.
- Tool / Source Limits: anything that restricts your ability to verify facts.
Do NOT generate a final answer until:
- The Validation List is complete,
- The assumptions are internally consistent,
- Each assumption has a severity and mitigation.
## 2. Guardrail Logic (what to do with high/critical assumptions)
After building the Validation List:
- If any assumption with Severity 4 (Critical) is central to the user’s request:
- Do NOT guess.
- Prefer to:
- Ask targeted clarifying questions, OR
- Provide only a high-level, conditional answer clearly framed as “If X, then …; if Y, then …”.
- Clearly state what you CANNOT reliably answer without more information.
- If there are assumptions with Severity 3 (High):
- Explicitly surface them in the answer as key caveats.
- Prefer scenario-based or range-based answers over single-point, overly precise claims.
- Avoid fabricating specific facts, metrics, or citations; if you are uncertain, say so directly.
- For Severity 1–2 assumptions:
- You may proceed, but still keep them listed for transparency.
- Where relevant, mention them briefly as caveats.
General hallucination-control rules:
- Do NOT invent:
- Citations, URLs, document titles, or quotes.
- Detailed statistics or dates without clear basis.
- If you lack enough information for a reasonable answer, say so and explain what is missing.
- Prefer clear “I don’t know / can’t reliably say” over confident speculation.
## 3. Answer Phase (only after Validation Phase)
Once the Validation List is complete and you have applied the guardrail logic:
1. Create a section called "Answer".
2. Provide the best possible answer that:
- Respects all identified constraints.
- Avoids relying on any unmitigated Critical (Severity 4) assumptions.
- Clearly labels major uncertainties, scenarios, or caveats.
3. Where applicable, explicitly reference key assumptions by ID (e.g., “This recommendation depends on A2 and A3”).
If you decide you cannot provide a reliable answer:
- State that explicitly in the Answer section.
- Summarize why (refer to the relevant assumptions and their severities).
- Suggest the minimal set of clarifying questions or data that would allow a more reliable answer.
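The severity rubric and guardrail logic above also translate into a small amount of code. Here is a minimal sketch, with an illustrative `Assumption` record, that gates the Answer phase on any unmitigated Critical assumption:

```python
# Sketch: severity-scored assumptions plus a gate that blocks the Answer phase
# while any Critical assumption is unmitigated. Names and values are illustrative.
from dataclasses import dataclass

SEVERITY_LABELS = {1: "Low", 2: "Moderate", 3: "High", 4: "Critical"}


@dataclass
class Assumption:
    ident: str
    description: str
    severity: int          # 1-4, per the rubric above
    mitigation: str = ""   # empty string = unmitigated


def can_answer(assumptions: list) -> tuple:
    """Return (ok, blockers); blocks on any unmitigated Critical assumption."""
    blockers = [
        f"{a.ident} ({SEVERITY_LABELS[a.severity]}): {a.description}"
        for a in assumptions
        if a.severity >= 4 and not a.mitigation
    ]
    return (not blockers, blockers)


if __name__ == "__main__":
    validation_list = [
        Assumption("A1", "Mid-size means 10-80 engineers", 2, "give guidance in ranges"),
        Assumption("A4", "No extreme sector regulation applies", 4),  # unmitigated
    ]
    ok, blockers = can_answer(validation_list)
    print("Proceed to Answer phase" if ok else f"Ask clarifying questions first: {blockers}")
```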
🎁 Example Output (Running the Prompt)
---
## Sample User Question
> **User:**
> We’re a mid-size SaaS company with a monolith. Should we move to microservices over the next 18 months, and what risks should we consider?
---
## 1. Validation List
### Assumptions (with Severity)
* **A1: Mid-size definition**
* **Description:** “Mid-size” ≈ 100–800 employees, 10–80 engineers.
* **Severity:** 2 (Moderate)
* **Mitigation:** Give guidance in ranges (small/medium/large), avoid hard numbers.
* **A2: Multi-tenant B2B SaaS**
* **Description:** Product is multi-tenant B2B, serving many customer orgs.
* **Severity:** 3 (High)
* **Mitigation:** Offer branching guidance: “if multi-tenant vs single-tenant”; keep data-isolation advice conditional.
* **A3: Already in public cloud**
* **Description:** Monolith runs in a public cloud (AWS/Azure/GCP).
* **Severity:** 3 (High)
* **Mitigation:** Provide two tracks: “if in cloud” vs “if on-prem”; don’t assume specific cloud services as mandatory.
* **A4: Standard, not extreme regulation**
* **Description:** Subject to common SaaS compliance (e.g., SOC 2, GDPR), but not extreme sector regulation (e.g., core banking, defense).
* **Severity:** 4 (Critical)
* **Mitigation:** Make all advice conditional: “if highly regulated, you must add X controls”; avoid prescriptive compliance guidance without more info.
### Missing Data
* Industry and regulatory context.
* Hosting model (cloud vs on-prem).
* DevOps maturity (CI/CD, observability).
* Current system size/complexity and SLAs.
### Hidden Constraints (Inferred)
* Limited tolerance for downtime.
* Finite budget and team capacity (big-bang rewrite unlikely).
### Unresolved Ambiguities
* Is “microservices” an executive mandate or an open option?
* How tangled / centralized is the current data model?
### Tool / Source Limits
* No access to code, infra, or metrics.
* Guidance is pattern-based, not an in-depth architecture review.
---
## 2. Guardrail Logic Application
* **A4 (Critical):** Answer must be conditional on regulatory context; no specific regulatory prescriptions.
* **A2 & A3 (High):** Use scenario-based recommendations (multi-tenant vs single-tenant; cloud vs on-prem).
* **No guessing** about exact team size, infra, or compliance regime; explicitly mark uncertainties.
---
## 3. Answer
### 3.1 Recommended Strategy (Conditional on Assumptions)
1. **Clarify the goal before the pattern (A1, A2)**
* Decide if the primary driver is **scale, reliability, or delivery speed**.
* If your main pain is deployment risk and tight coupling, consider a **modular monolith + selective services**, not “microservices everywhere”.
2. **Use domain boundaries, not service count, as the driver (A2)**
* Run a lightweight domain mapping (billing, auth, reporting, core workflow).
* Aim for a few well-defined bounded contexts; each may become a service or a clean module.
3. **Adopt a strangler pattern instead of a big-bang rewrite (A3)**
* Keep the monolith as the host and peel off one domain at a time.
* Route calls via an API gateway or routing layer so you can migrate traffic gradually.
4. **Align pace with DevOps maturity (A3, A4)**
* If CI/CD, testing, and observability are weak, **improve those first** and limit the number of new services.
* If strong, you can safely increase the number of independently deployed components.
### 3.2 Scenario Highlights
* **If you’re already in public cloud (A3 true):**
* Lean on managed databases, queues, and API gateways.
* Start with 1–2 low-risk domains (e.g., notifications, reporting).
* **If you’re on-prem (A3 false):**
* Consider cloud migration as a separate or preceding step.
* A “lift-and-improve” of the monolith first may reduce risk.
* **If you’re highly regulated (A4 false):**
* Treat each extraction as a controlled, documented change.
* Involve compliance and security in defining service boundaries and data flows.
* Expect a slower, more documentation-heavy path than the 18-month target implies.
### 3.3 Key Risks & Mitigations
* **Operational overload (A3, A4; DevOps maturity implicit):** Too many services, weak ops.
* *Mitigation:* Cap service count early; require minimal CI/CD + observability before splitting domains.
* **Data consistency issues (A2):** Distributed data across services.
* *Mitigation:* Start with clear ownership per domain and limited async patterns.
* **Endless migration (A1):** 18 months passes with no visible benefit.
* *Mitigation:* Only extract domains tied to specific business outcomes (SLA improvement, new revenue, etc.).
### 3.4 What I Cannot Reliably Answer Yet
Because of **A4 (Critical)** and the unknowns around hosting and DevOps maturity:
* I cannot safely prescribe a detailed service map, technology stack, or compliance regimen.
* To give a concrete, opinionated plan, I’d need at least:
1. Your industry and key regulations.
2. Cloud vs on-prem (and which cloud, if applicable).
3. A rough view of current CI/CD and incident history.
Until then, treat this as a **pattern-level playbook** rather than a finalized architecture blueprint.
DAY 4 — High Fidelity Template Design
Challenge: Convert a single-use prompt into a reusable template with parameters and constraints.
Prompt:
“Convert the following task into a reusable template with variables, required inputs, forbidden inputs, output constraints, and example usage. Include a version number and an explanation of how future versions can improve.”
Skill: Prompt systemization
Stretch Goal: Add scenario-based variants for multiple use cases.
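A minimal sketch of what a systemized Day 4 template might look like in code: variables, required inputs, a forbidden-input check, and a version number baked into the output. Variable names, forbidden values, and the version scheme are illustrative, not prescriptive.

```python
# Sketch: a single-use task turned into a versioned, parameterized template.
# Variable names, forbidden values, and the version scheme are illustrative.
import string

TEMPLATE_VERSION = "1.0"

TEMPLATE = string.Template(
    "Summarize the following $artifact for $audience.\n"
    "Tone: $tone. Length: at most $max_words words.\n"
    "Content:\n$content"
)

REQUIRED_INPUTS = {"artifact", "audience", "tone", "max_words", "content"}
FORBIDDEN_VALUES = {"confidential", "internal-only"}  # example of a forbidden-input rule


def render(**params) -> str:
    missing = REQUIRED_INPUTS - params.keys()
    if missing:
        raise ValueError(f"Missing required inputs: {sorted(missing)}")
    if any(str(v).lower() in FORBIDDEN_VALUES for v in params.values()):
        raise ValueError("Forbidden input detected")
    return f"[template v{TEMPLATE_VERSION}]\n" + TEMPLATE.substitute(**params)


if __name__ == "__main__":
    print(render(artifact="meeting notes", audience="the exec team",
                 tone="neutral", max_words="150", content="..."))
```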
DAY 5 — Schema-Guided Summaries
Challenge: Summarize with precision and structure.
Prompt:
“Summarize the content using the following schema: Verified Facts, Assumptions, Risks, Contradictions, Required Actions. Require the model to cite text fragments that support the summary.”
Skill: Structured summarization
Stretch Goal: Output in machine-readable JSON.
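For the stretch goal, the schema can be written down once and used to check the model's JSON reply. A minimal sketch, with section names mirroring the prompt and deliberately light type checks:

```python
# Sketch: the Day 5 schema as a machine-readable structure, plus a light check
# that a model's JSON reply contains every required section. Illustrative only.
import json

SUMMARY_SCHEMA = {
    "verified_facts": list,      # each item could carry a "supporting_text" citation
    "assumptions": list,
    "risks": list,
    "contradictions": list,
    "required_actions": list,
}


def check_summary(raw_json: str) -> list:
    """Return a list of problems; an empty list means the reply matches the schema."""
    try:
        data = json.loads(raw_json)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    problems = []
    for key, expected_type in SUMMARY_SCHEMA.items():
        if key not in data:
            problems.append(f"missing section: {key}")
        elif not isinstance(data[key], expected_type):
            problems.append(f"{key} should be a {expected_type.__name__}")
    return problems


if __name__ == "__main__":
    reply = '{"verified_facts": [], "assumptions": [], "risks": []}'
    print(check_summary(reply))  # -> missing contradictions, missing required_actions
```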
DAY 6 — Multi Persona Reasoning
Challenge: Create a debate between personas with expertise and incentives.
Prompt:
“Create two expert personas with conflicting incentives and have them debate the topic. Then generate a synthesis summary that identifies the strongest arguments, weak points, and areas of convergence.”
Skill: Multi-agent reasoning
Stretch Goal: Add a third persona with veto authority.
DAY 7 — High Quality Question Generation
Challenge: Improve the questions before improving the answers.
Prompt:
“Generate ten higher quality questions that challenge the assumptions, strategic framing, and implicit tradeoffs in the topic. Do not repeat obvious or surface-level questions.”
Skill: Meta-reasoning
Stretch Goal: Group the questions into categories that reflect different ways of thinking.
DAY 8 — Retrieval Disciplined Prompt
Challenge: Force retrieval based reasoning.
Prompt:
“Answer only with the information found in the provided context. If the context does not contain the answer, respond with the phrase: Insufficient context.”
Skill: RAG discipline
Stretch Goal: Add citation formatting rules.
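A minimal sketch of how the retrieval-disciplined prompt could be wrapped in code so the caller can tell a grounded answer from the sentinel phrase. `call_llm` is a hypothetical placeholder for your model client.

```python
# Sketch: wrap the retrieval-disciplined prompt so calling code can tell a
# grounded answer from the sentinel phrase. `call_llm` is a placeholder.
from typing import Optional

SENTINEL = "Insufficient context"

PROMPT_TEMPLATE = (
    "Answer only with information found in the context below.\n"
    "If the context does not contain the answer, respond with the phrase: "
    + SENTINEL + ".\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)


def call_llm(prompt: str) -> str:
    """Placeholder: swap in your actual model client."""
    raise NotImplementedError


def grounded_answer(question: str, context: str) -> Optional[str]:
    """Return the model's answer, or None when the context was insufficient."""
    reply = call_llm(PROMPT_TEMPLATE.format(context=context, question=question)).strip()
    return None if reply.startswith(SENTINEL) else reply
```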
DAY 9 — Code and Tests in One Output
Challenge: Ensure code generation includes coverage.
Prompt:
“Write the code and the matching test suite in one response. Include assertions, edge cases, and a commentary that explains the design choices. Require static analysis of the final code before completion.”
Skill: AI coding operations
Stretch Goal: Include code coverage targets.
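To make the expected shape concrete, here is a deliberately tiny example of what "code and tests in one output" can look like: one function, its edge cases, and a pytest suite beside it. The function itself is illustrative.

```python
# Sketch of the output shape Day 9 asks for: implementation and tests together.
# The function is deliberately trivial; the point is the pairing.
import pytest


def normalize_percentage(value: float, total: float) -> float:
    """Return value/total as a percentage, guarding against a zero total."""
    if total == 0:
        raise ValueError("total must be non-zero")
    return round(100.0 * value / total, 2)


# --- tests (run with: pytest this_file.py) ---

def test_happy_path():
    assert normalize_percentage(25, 200) == 12.5


def test_rounding_edge_case():
    assert normalize_percentage(1, 3) == 33.33


def test_zero_total_raises():
    with pytest.raises(ValueError):
        normalize_percentage(1, 0)
```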
DAY 10 — Precision Rewriting
Challenge: Transform text with controlled parameters.
Prompt:
“Rewrite this content for a specific audience. Control tone, intent, reading level, emotional intensity, and structural flow. Provide a quality assurance checklist that verifies correct transformation.”
Skill: Text transformation
Stretch Goal: Add rules for forbidden and preferred phrasing.
DAY 11 — Private Reasoning Control
Challenge: Manage when the model reasons privately.
Prompt:
“Think through the problem privately and do not reveal your reasoning. Produce only the final answer in one concise paragraph and provide a short correctness claim explaining why the answer is reliable.”
Skill: Controlled chain of thought
Stretch Goal: Add token limits for internal reasoning.
DAY 12 — Tradeoff Framework
Challenge: Produce a comparison framework.
Prompt:
“Generate at least three viable options for the decision. Provide a comparison table with strengths, weaknesses, risks, cost, effort, and time to value. Identify the conditions that would shift the recommended option.”
Skill: Strategic evaluation
Stretch Goal: Add weighted scoring.
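The weighted-scoring stretch goal is a small calculation you can also run yourself to sanity-check the model's comparison table. A sketch with illustrative weights and scores:

```python
# Sketch: the Day 12 stretch goal (weighted scoring) as a small calculation you
# can run to sanity-check the model's table. Weights and scores are illustrative.
WEIGHTS = {"strengths": 0.30, "risk": 0.25, "cost": 0.20, "time_to_value": 0.25}

OPTIONS = {
    "Option A": {"strengths": 8, "risk": 6, "cost": 5, "time_to_value": 7},
    "Option B": {"strengths": 6, "risk": 8, "cost": 7, "time_to_value": 5},
    "Option C": {"strengths": 7, "risk": 7, "cost": 6, "time_to_value": 6},
}


def weighted_score(scores: dict) -> float:
    """Higher is better; risk and cost scores here mean 'how favorable', not 'how much'."""
    return round(sum(WEIGHTS[criterion] * value for criterion, value in scores.items()), 2)


if __name__ == "__main__":
    ranked = sorted(OPTIONS.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
    for name, scores in ranked:
        print(f"{name}: {weighted_score(scores)}")
```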
DAY 13 — Perspective Switching
Challenge: Rewrite content from multiple professional viewpoints.
Prompt:
“Explain the topic from the perspective of a lawyer, an engineer, a CEO, and an economist. Afterward, generate a unified perspective that integrates the most important insights from each viewpoint.”
Skill: Cognitive reframing
Stretch Goal: Add a cultural or geopolitical perspective.
DAY 14 — 360 Degree Expansion
Challenge: Expand a simple idea into a multi-dimensional plan.
Prompt:
“Expand the idea into a full 360 degree analysis including stakeholders, risks, timelines, dependencies, incentives, political considerations, and execution complexity.”
Skill: Strategic framing
Stretch Goal: Add a RACI matrix.
DAY 15 — Advanced Data Extraction
Challenge: Extract structure from noise.
Prompt:
“Extract all entities, metrics, decisions, commitments, dependencies, dates, and risks from the text. Produce output that meets the rules of the JSON schema provided below.”
Skill: Information extraction
Stretch Goal: Add validation logic.
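For the "JSON schema provided below" part, here is a sketch of a schema you could paste under the prompt, plus the stretch-goal validation logic using the third-party `jsonschema` package (assumed installed via `pip install jsonschema`). Field names are illustrative.

```python
# Sketch: a JSON Schema for the Day 15 extraction, plus validation via the
# third-party `jsonschema` package. Field names are illustrative.
from jsonschema import ValidationError, validate

EXTRACTION_SCHEMA = {
    "type": "object",
    "required": ["entities", "metrics", "decisions", "risks"],
    "properties": {
        "entities": {"type": "array", "items": {"type": "string"}},
        "metrics": {
            "type": "array",
            "items": {
                "type": "object",
                "required": ["name", "value"],
                "properties": {"name": {"type": "string"}, "value": {"type": "string"}},
            },
        },
        "decisions": {"type": "array", "items": {"type": "string"}},
        "risks": {"type": "array", "items": {"type": "string"}},
    },
}


def validate_extraction(payload: dict) -> list:
    """Return [] if the payload matches the schema, else a one-item problem list."""
    try:
        validate(instance=payload, schema=EXTRACTION_SCHEMA)
        return []
    except ValidationError as exc:
        return [exc.message]


if __name__ == "__main__":
    print(validate_extraction({"entities": ["EMEA"], "metrics": [], "decisions": [], "risks": []}))  # -> []
```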
DAY 16 — Self Critique and Revision
Challenge: Improve outputs with critique.
Prompt:
“First critique the draft on clarity, logic, completeness, and coherence. Then produce an improved version that resolves all issues identified in the critique.”
Skill: Self-evaluation
Stretch Goal: Add severity scoring.
DAY 17 — Style Guide Enforcement
Challenge: Apply a custom style guide.
Prompt:
“Rewrite the text using the following style guide: tone, cadence, sentence structure, verb patterns, formatting rules, and vocabulary. Include a compliance checklist.”
Skill: Brand and writing consistency
Stretch Goal: Create a template that can be reused for future rewrites.
DAY 18 — Long Context Navigation
Challenge: Manage large inputs effectively.
Prompt:
“Segment the content and produce summaries that preserve meaning at 10 percent, 25 percent, 50 percent, and 75 percent compression.”
Skill: Context abstraction
Stretch Goal: Add thematic clustering.
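A minimal sketch of how the compression levels could be driven from code, giving each pass an explicit word budget. Treating each level as the fraction of the original length to keep is an assumption, and `call_llm` is a placeholder for your model client.

```python
# Sketch: drive the compression levels from code so each pass has an explicit
# word budget. Treating each level as "fraction of the original length to keep"
# is an assumption; `call_llm` is a placeholder for your model client.
COMPRESSION_LEVELS = [0.10, 0.25, 0.50, 0.75]


def call_llm(prompt: str) -> str:
    raise NotImplementedError


def layered_summaries(text: str) -> dict:
    """Return {level: summary} for each compression level."""
    word_count = len(text.split())
    summaries = {}
    for level in COMPRESSION_LEVELS:
        budget = max(1, int(word_count * level))
        summaries[level] = call_llm(
            f"Summarize the content below in at most {budget} words, "
            f"preserving the core meaning and key terms.\n\n{text}"
        )
    return summaries
```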
DAY 19 — Scenario Modeling
Challenge: Generate multiple strategic futures.
Prompt:
“Create best case, expected case, worst case, and black swan scenarios. Explain the drivers of each scenario and identify early warning indicators.”
Skill: Forecasting
Stretch Goal: Add probability scoring.
DAY 20 — Embedded Prompt Chain
Challenge: Build a three-stage chain.
Prompt:
“Design a three-stage prompt chain that breaks the task into planning, execution, and validation. Each stage must accept the previous output and produce a stricter, more refined result.”
Skill: Modular prompting
Stretch Goal: Add error recovery behavior.
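A minimal sketch of the three-stage chain with the stretch-goal error recovery: the validator's feedback is folded back into the plan and execution is retried once. `call_llm` is again a hypothetical placeholder.

```python
# Sketch: a plan -> execute -> validate chain with the stretch-goal error
# recovery (validator feedback is folded back in and execution retried once).
# `call_llm` is a hypothetical placeholder for your model client.

def call_llm(prompt: str) -> str:
    """Placeholder: swap in your actual model client."""
    raise NotImplementedError


def run_chain(task: str, max_retries: int = 1) -> str:
    plan = call_llm(f"Plan the steps needed for this task, as a numbered list.\nTask: {task}")
    draft = ""
    for _ in range(max_retries + 1):
        draft = call_llm(f"Execute this plan step by step.\nPlan:\n{plan}\nTask: {task}")
        verdict = call_llm(
            "Validate the draft against the plan. Reply PASS, or FAIL followed "
            f"by what to fix.\nPlan:\n{plan}\nDraft:\n{draft}"
        )
        if verdict.strip().upper().startswith("PASS"):
            return draft
        plan += f"\n\nFixes requested by the validator:\n{verdict}"
    return draft  # best effort after retries
```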
DAY 21 — Risk Identification and Analysis
Challenge: Identify threats early.
Prompt:
“Identify hidden risks, contradictions, untested assumptions, missing owners, and potential failures. Classify each risk by probability and impact.”
Skill: Critical risk analysis
Stretch Goal: Visual risk heat map.
DAY 22 — Meetings to Execution
Challenge: Turn noise into clarity.
Prompt:
“Convert meeting notes into a list of decisions, actions, risks, owners, deadlines, and unresolved questions. Include a summary of strategic implications.”
Skill: Operational clarity
Stretch Goal: Add OKR alignment.
DAY 23 — Reverse Prompt Engineering
Challenge: Deconstruct how an output was produced.
Prompt:
“Reverse engineer the likely prompt that produced this output. Then generate three improved versions and explain why they are superior.”
Skill: Prompt intuition
Stretch Goal: Add risk of misinterpretation analysis.
DAY 24 — High Novelty Creativity Prompt
Challenge: Prevent generic answers.
Prompt:
“Generate ten ideas from this topic that do not resemble the top three typical solutions. Use non-obvious analogies and cross-discipline inspiration.”
Skill: Creative prompting
Stretch Goal: Add feasibility scoring.
DAY 25 — Signature Prompt Design
Challenge: Build your personal operating prompt.
Prompt:
“Create a personal signature prompt that reflects your role, decision style, writing preferences, risk tolerance, and reasoning expectations. Include a version history and guidance for future improvement.”
Skill: Prompt mastery
Stretch Goal: Add multi-mode variants for analysis, planning, and creation.
