Independent Research · April 2026

Social Capital
Launch Intelligence

Current Launch Cohort intelligence for Social Capital client work. We ingest 19 launch slices, roll them into 7 company buckets, and report cleaned plus archived counts.

Scope: 19 launches -> 7 companies | - cleaned posts | - archived rows
Thesis: Launch outcomes in this cut are primarily shaped by posting window, hook style, and repeat creator coverage.
  • cleaned posts analyzed
  • companies (campaign buckets)
  • 19 launches in current cohort
  • unique creators
  • repeat creators
  • Platform: X (cleaned cohort data)
In 10 Seconds
Clear answers for a first-time reader, without jargon.
Reader First
What this is

This page shows which creators repeatedly post around launch day across different companies.

What we found

We found repeat creators who post across multiple company buckets, a single strongest posting window, and a repeatable message style.

What to do next

Run a 2-week creator outreach pilot using the top list, best time, and best hook style.

— repeat creators
Best time: —
Best message style: —
📊 Posts Analyzed
7 companies | 19 launches
Best Posting Window
IST · by trim10 likes
🎣 Best Hook Style
by avg post likes
🔑 Repeat Creators Across Launches
active in 2+ company buckets
🧭 What To Do Next
Simple pilot steps for the next 2 weeks.
Action Plan
2-Week Pilot Checklist
  • Pick first 10 creators: use the Primary 10 shortlist below.
  • Post timing default: .
  • Message default: open with framing.
  • Launch focus: prioritize day 0 and day 1 posting windows.
  • Success check: compare median likes and total engagement against your last launch.
Primary 10 Creators To Contact First
Repeat creators in this cut:
    Caveat: this is public launch-pattern analysis, not proof of paid coordination.
    🎯 What This Website Is
    Plain-English context: what was analyzed, why, and how this should be used.
    Project Need
    Why I started this

    I was curious how Social Capital helps brands generate millions of views at launch. If 500+ creators are involved, there should be repeatable operating patterns rather than random posting. I wanted to understand those tactics from observable data.

    What I tried to reverse engineer

    I tested whether launch posts show recurring creator patterns (who posts, when they post, and overlap across brands) and whether messaging language and hooks correlate with specific creator clusters and outcomes.

    Constraints and current state

    This was built in roughly 5-6 hours without Twitter API access and without state-of-the-art models. So this version is a strong reverse-engineering first pass: useful for surfacing likely patterns, not a final causal model.

    🧩 First-Time Reader Path
    Follow this order to get from context to action in under 60 seconds.
    🧭 What We Found
    Top findings in plain language, with direct evidence links.
    Read in 60s
    1) What we analyzed · High confidence

    The Current Launch Cohort includes 19 launches grouped into 7 company buckets. This cut uses - cleaned posts and keeps - rows archived for auditability.

    2) What we found · Medium confidence

    Best posting window: -. Best hook style: -. Repeat creators active across company buckets: -.

    3) What it means · Directional

    Use the top window and hook style as the default launch playbook, then run one holdout test cell per launch.

    🧠 Where Is Messaging Language Reused?
    Launch-window clustering by language pattern and intent to surface directional coordination signals.
    Coordination Signals
    Clusters Found
    Strong Signals
    Creators In Clusters
    Signal | Intent | Repeated Language Pattern | Structure Cues | Creators | Company Buckets | Example Posts
    Directional · Evidence: supporting · Clusters indicate likely coordinated messaging patterns, not proof of financial or contractual relationships.
    Caveat: language reuse can come from common product narratives or organic trend convergence.
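The clustering itself can be approximated cheaply. A minimal sketch (not the production pipeline) that flags near-duplicate launch-window posts by Jaccard token overlap; the `handle`/`text` field names and the 0.6 threshold are assumptions for illustration:

```python
def token_set(text):
    # Lowercase and split on whitespace; a real pipeline would also strip
    # URLs, @mentions, and punctuation before comparing.
    return set(text.lower().split())

def jaccard(a, b):
    # Jaccard similarity between two token sets (0.0 = disjoint, 1.0 = identical).
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def flag_reused_language(posts, threshold=0.6):
    # Pairwise scan; acceptable for a few thousand launch-window posts.
    flagged = []
    for i in range(len(posts)):
        for j in range(i + 1, len(posts)):
            sim = jaccard(token_set(posts[i]["text"]), token_set(posts[j]["text"]))
            if sim >= threshold:
                flagged.append((posts[i]["handle"], posts[j]["handle"], round(sim, 2)))
    return flagged
```

Pairs that clear the threshold become candidate cluster edges; the organic-convergence caveat above still applies to every flagged pair.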
    🛡️ How We Double-Checked
    Stress tests for key findings across alternate filters and random baselines.
    Defensibility Layer
    Check | Baseline | Variant | Result | Stability
    Robustness summary: -.
    Limitations: text-pattern similarity is directional, not contractual proof; this cut is X-platform heavy; cluster thresholds can miss subtle coordination or include organic meme effects.
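One way to fill the Stability column is a seeded bootstrap over the two groups a check compares (a sketch under that assumption; the function and argument names are illustrative):

```python
import random
import statistics

def median_gap_stability(group_a, group_b, n_boot=500, seed=7):
    # Resample both groups with replacement and record how often group_a's
    # median beats group_b's; a share near 1.0 (or 0.0) means the direction
    # of the finding is stable, not a filter artifact.
    rng = random.Random(seed)  # seeded so refreshes are reproducible
    wins = 0
    for _ in range(n_boot):
        sample_a = [rng.choice(group_a) for _ in group_a]
        sample_b = [rng.choice(group_b) for _ in group_b]
        if statistics.median(sample_a) > statistics.median(sample_b):
            wins += 1
    return wins / n_boot
```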
    🧪 How Data Was Cleaned
    Plain-English rules used before analysis so conclusions stay interpretable.
    Methodology Snapshot

    Rows kept for analysis

    • Relevant posts mapped to a known cohort company bucket.
    • Launch-window posts (day -1 to day +3) are always retained.
    • Non-launch posts pass burst and quality checks before inclusion.

    Rows archived (not deleted)

    • Far and low-signal rows: abs(day) > 60 and engagement <= 2.
    • Burst excess: only top 2 rows kept per creator + company + date.
    • Reply noise: short @replies with near-zero engagement.
    • Low-signal tail: likes=0 and reposts=0 after day +2.
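
The keep/archive rules above reduce to a single row classifier. A sketch assuming per-row fields `day`, `likes`, `reposts`, `engagement`, `is_reply`, and `text`; the burst-excess rule needs a per-creator grouping pass and is omitted here:

```python
def classify_row(row):
    # Returns "active" or "archived" following the cleaning rules in order.
    day = row["day"]
    # Launch-window posts (day -1 to day +3) are always retained.
    if -1 <= day <= 3:
        return "active"
    # Far and low-signal rows.
    if abs(day) > 60 and row["engagement"] <= 2:
        return "archived"
    # Reply noise: short @replies with near-zero engagement.
    if row.get("is_reply") and len(row["text"]) < 30 and row["engagement"] <= 1:
        return "archived"
    # Low-signal tail: likes=0 and reposts=0 after day +2.
    if row["likes"] == 0 and row["reposts"] == 0 and day > 2:
        return "archived"
    return "active"
```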

    Glossary

    • Launch slice: one cohort source file tied to a specific launch moment.
    • Company bucket: normalized brand grouping used for cross-launch comparison.
    • Active cleaned: rows used by KPIs, charts, and findings.
    • Archived: rows excluded from active analysis but retained for audit.
    High confidence · Cleaning outcome: - rows in active analysis and - rows archived for audit and replay.
    Count reconciliation: - full corpus = - active cleaned + - archived.
    QA checks on each data refresh: launches remain 19, company buckets remain 7, full corpus equals active plus archived, and dashboard KPIs/charts continue to read active cleaned rows only.
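Those refresh invariants are cheap to assert in code. A sketch (function and argument names are illustrative, not the actual QA harness):

```python
def qa_refresh(full_corpus, active, archived, launches, companies):
    # Invariants from the reconciliation rule: counts must tie out exactly,
    # and the cohort shape must not drift between refreshes.
    assert full_corpus == active + archived, "full corpus != active + archived"
    assert launches == 19, "launch count drifted from 19"
    assert companies == 7, "company bucket count drifted from 7"
    return True
```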
    🔑 Creators We Should Talk To First
    Cross-company repeat creators from cleaned data. Treat this as public-pattern evidence, not contract proof.
    Most Valuable Intel
    Handle | Company Buckets | # Posts | Median Likes | Mean Likes | P90 Likes | Status
    Directional · Evidence: supporting · Finding: creators appear in 2+ company buckets, but only appear in the launch window (day -1 to +3) of 2+ company buckets in this cut. Top overlap handle: . Evidence: profile sample.
    Action: Maintain a pre-approved repeat creator list and schedule them for day -1 to day +1 across launches.
    Caveat: creator overlap indicates coordination potential, not proof of paid or contractual relationships.
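Under the hood this is a distinct-bucket count per handle. A sketch assuming rows carry `handle`, `company`, and `day` fields; the launch-window filter matches the day -1 to +3 definition used above:

```python
from collections import defaultdict

def repeat_creators(rows, min_buckets=2, launch_window_only=True):
    # Map each handle to the set of company buckets it posted in,
    # optionally restricted to the launch window, then keep handles
    # active in min_buckets or more distinct buckets.
    buckets = defaultdict(set)
    for r in rows:
        if launch_window_only and not (-1 <= r["day"] <= 3):
            continue
        buckets[r["handle"]].add(r["company"])
    return sorted(h for h, b in buckets.items() if len(b) >= min_buckets)
```

The looser variant (`launch_window_only=False`) reproduces the "2+ company buckets anywhere" count; the strict variant reproduces the launch-window overlap count.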
    When and How Should We Post?
    When posts go live and which formats earn the most engagement.
    Time of Day vs. Avg Likes
    IST (UTC+5:30) · trim10 likes by hour
    Medium confidence · Evidence: direct · 7–11am IST trim10 likes: . After 8pm trim10 likes: . Relative difference: %. Evidence: sample post.
    Action: Default launch-day drops to the top window and reserve one alternative slot for experimentation.
    Caveat: timing effect may vary by company bucket, creator mix, and launch objective.
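The hour bucketing behind this chart is straightforward. A sketch assuming each post carries a UTC timestamp (the `ts_utc` and `likes` field names are assumptions):

```python
from datetime import datetime, timedelta, timezone

IST = timezone(timedelta(hours=5, minutes=30))  # UTC+5:30

def likes_by_ist_hour(posts):
    # Group each post's likes under its IST hour of day (0-23);
    # a trimmed mean per hour then yields the chart values.
    by_hour = {}
    for p in posts:
        hour = p["ts_utc"].astimezone(IST).hour
        by_hour.setdefault(hour, []).append(p["likes"])
    return by_hour
```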
    Hook Type vs. Robust Engagement
    Trimmed mean (10%) of total engagement
    Medium confidence · Evidence: direct · Top robust hook: . It performs x vs Bold Claim on trim10 engagement. Evidence: sample post.
    Action: Start launch copy with the top hook style and keep Bold Claim as a controlled variant.
    Caveat: hook taxonomy is heuristic; class boundaries can overlap on nuanced posts.
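The trim10 metric used here and in the timing chart is a 10% trimmed mean, which blunts viral outliers without deleting them from the audit trail. A minimal sketch:

```python
def trim10_mean(values):
    # 10% trimmed mean: drop the lowest and highest 10% of values,
    # then average the remaining core.
    vals = sorted(values)
    k = int(len(vals) * 0.10)
    core = vals[k:len(vals) - k] if k else vals
    return sum(core) / len(core)
```

On ten posts with likes 1 through 10 this averages the middle eight; a single 1000-like breakout among nine ordinary posts no longer moves the metric.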
    📐 Which Format Choices Improve Baseline Performance?
    Thread impact and outlier diagnostics for cleaner interpretation.
    Thread vs. Single Post (X)
    Median and mean likes by format
    Medium confidence · Evidence: supporting · Thread median likes: . Single-post median likes: . Relative gap: %.
    Action: Use short threads for complex announcements and single posts for quick product updates.
    Caveat: format impact can be confounded by creator size and topic complexity.
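The median comparison reads like this in code (a sketch; the `is_thread` and `likes` field names are assumptions):

```python
import statistics

def format_gap(posts):
    # Median likes for threads vs single posts, plus the relative gap in percent.
    threads = [p["likes"] for p in posts if p["is_thread"]]
    singles = [p["likes"] for p in posts if not p["is_thread"]]
    med_thread = statistics.median(threads)
    med_single = statistics.median(singles)
    gap_pct = (med_thread - med_single) / med_single * 100
    return med_thread, med_single, round(gap_pct, 1)
```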
    Outlier Diagnostics (Likes)
    Overall distribution checkpoints
    High confidence · Evidence: direct · Median likes: , mean likes: , p99 likes: . Use robust metrics before raw means.
    Action: Report medians and trim10 as default KPIs; treat p99 spikes as special-case wins.
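The three checkpoints need no stats library beyond the standard one. A sketch using a nearest-rank p99 (the percentile convention is an assumption; dashboards vary):

```python
import math
import statistics

def distribution_checkpoints(likes):
    # Median, mean, and nearest-rank p99 of a likes distribution.
    vals = sorted(likes)
    median = statistics.median(vals)
    mean = sum(vals) / len(vals)
    p99 = vals[math.ceil(0.99 * len(vals)) - 1]
    return median, mean, p99
```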
    🚀 How Concentrated Is Launch Timing?
    Post concentration around launch windows by company bucket.
    Launch Operations Intel
    Medium confidence · Evidence: supporting · Day 0 concentration: % of posts in selected company view. Highest bucket: .
    Action: Build a day 0 + day 1 cadence template, then adapt by company bucket based on concentration.
    Caveat: rollout concentration does not isolate causality from announcement strength.
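Concentration here is just the share of posts landing exactly on launch day. A sketch (a per-bucket version filters rows by `company` first; field names are assumptions):

```python
def day0_concentration(rows):
    # Percent of posts whose launch-relative day is exactly 0.
    total = len(rows)
    if not total:
        return 0.0
    on_day0 = sum(1 for r in rows if r["day"] == 0)
    return round(100 * on_day0 / total, 1)
```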
    📡 Which Outliers Should Be Isolated From Baseline Planning?
    Extreme-like posts separated for inspection so they do not distort baseline readings.
    Quality Filter
    Handle | Company Bucket | Hook Type | Thread | Likes | Reposts | Replies | Engagement | Hour (IST) | Day | Tweet
    High confidence · Evidence: direct · p99 cutoff: likes. Dominant hook types among outliers: . Evidence: top outlier post.
    Action: Keep a separate outlier review lane so breakout posts inform creative, not baseline planning.
    📋 Full Data Details
    Audit section. Cleaned view shows active analysis rows; Full Corpus includes archived rows.
    showing — rows
    Handle | Company | Date | Hr | Hook | Thread | Likes | RT | Replies | Engagement | Day