Pay-to-play puts you on the review pages AI pulls from. Community work puts you in the talk AI pulls from too. Profound's October 2025 study of one billion AI citations measured Reddit at 1.2% on ChatGPT. The rate was 2.3% on Google AI Overviews. It hit 6.3% on Perplexity. ChatGPT's Reddit rate runs from 1% to 14%. It depends on the query and the active index. The community surface is not optional. Not in any field where buyers talk about brands in public. This chapter covers the platform map. It covers the 90/10 value rule. It covers karma lines. It covers the operator-led pattern that wins citations. It covers sentiment work on bad threads. It ends with worked cases across B2B SaaS, services, and consumer brands.
Why This Technique Matters
AI Search pulls from community sites for a reason. Community talk shows what review pages cannot. Real operator views. Blunt opinion. First-hand stories the model trusts as a key signal. Review sites feed compared data. Community sites feed the texture AI weaves into answers. Both add up. Neither one replaces the other.
The platform math is important. Reddit's rate runs from 1.2% on ChatGPT up to 6.3% on Perplexity in normal periods. ChatGPT pulls 1% to 14% from Reddit during heavy cycles. LinkedIn drives most of Microsoft Copilot's cites. Hacker News pulls into technical queries across all systems. Quora threads also index for AIO and Gemini. Even on threads years old. The full share across community sites is much larger than most brands think. The lift from steady work reaches the same scale as Pay-to-Play in many fields.
Community work has a side effect that adds to the rest of the Mentions pillar. A named operator gets known. A founder who posts on Reddit each week becomes an entity AI links to the topic. Six months of steady posts on r/SaaS, r/Entrepreneur, or other subreddits makes the operator a known voice. The lift carries across surfaces. The operator gets quoted in third-party articles (Chapter 3). They get asked onto podcasts and panels. AI cites them in answers even outside the first community.
The cost of getting community work wrong is real. Some brands push on Reddit with brand posts, link drops, and fake upvotes. Both the community and the platform punish them. The punishment stays in the data the model pulls from. The brand's Reddit citation share turns from a plus into a minus. Fixing a bad Reddit run takes 12 to 24 months. It often needs a new operator handle. The rules are not optional. Pay-to-play platforms tolerate sloppy work. Community platforms do not.
The Community Platform Landscape
Community platforms fall into four groups. Most brands work on two or three. The right two or three depend on where the buyer hangs out. They also depend on which AI systems pull from each.
Reddit is the main discussion site for most B2B and technical categories. Subreddits are narrow. Finding the right two or three for your category is the first job. Operator presence on r/SaaS, r/Entrepreneur, r/Marketing, r/sysadmin, r/PPC, or niche subreddits earns the citation rates Profound measured. Brand accounts (apart from operator accounts) are allowed. But they are less useful. Reddit rewards real people, not brand voices.
Most outside operators miss one fact about Reddit. The community spots fakes fast. Account age, comment quality, karma source, and post style all show if the account is a real member or a brand rep on the job. The 90/10 rule and the 500-karma rule exist for this reason. Below those lines, the community treats the operator as fake.
Investment: little cash cost. The real cost is operator time. The operator gives 2 to 5 hours a week to real Reddit work. A coordinator can help. They watch the right subreddits and flag good post chances. That saves the operator time. But outsourcing the posts fails. The operator must post.
Quora
Quora is the main Q-and-A site for AIO and Gemini citation share. Google indexes old Quora threads deeply. Answers from named experts on topical questions stay live for years. They keep earning AI citation share long after the answer is written. The pattern is not like Reddit. Longer answers work better. Pick high-volume questions in the topic. Use one steady author handle.
Quora Credits and follower count are the platform's reputation signals. AI citation share lifts once an author crosses 100 Credits and 30 strong topic answers. The named-expert effect is strong here. AI systems weight known authors many times higher than unknown handles.
LinkedIn is the main community surface for Microsoft Copilot. It is also a strong second surface for Claude and Perplexity. LinkedIn long-form posts act as community posts in AI retrieval. The site feels more like publishing than discussion. But the retrieval signal is strong. Bing indexes LinkedIn for Copilot. Google indexes it for AIO and Gemini.
The LinkedIn pattern is simple. The operator posts one to three long-form posts a week. They also comment on topic-relevant posts. Brand accounts post less well than named operators. The site rewards steady work over bursts. An operator posting twice a week for a year builds much more authority than twenty posts in a month.
Industry-Specific Forums
Most categories have one or two niche forums beyond the big cross-category platforms. Hacker News for tech and startups. Designer Hangout and Spectrum for design. Indie Hackers for SaaS founders. GitHub Discussions for open source. Stack Overflow for technical. r/Discord for community-aware tools. Designer News for design news. Mode for analytics. Niche forums often have higher per-post AI citation share than general forums. The topical match is tighter.
Finding the right forums takes work. Use two methods. First, check the AI citations for your category-defining queries. Community surfaces that show up in those citations are worth the time. Second, ask your existing customers where they hang out. The places they name often match the AI-cited surface.
The Community Citation Half-Life by Platform
Community posts decay as citation sources at very different rates. The platform's index and the AI retrieval pattern decide the curve. Brands that ignore the curve burn operator time on posts that age out fast or under-invest in platforms where the citation curve compounds for years. The half-life view tells you where to spend. AI citation decay plays out platform by platform here, not as a single category-wide number.
Hacker News. Fastest decay among the major community platforms. Most citation weight lives in the first 30 to 60 days. After 90 days, an HN thread carries roughly 25% of its peak citation contribution. After a year, near zero. The HN pattern rewards launch-quality posts paced at one strong post every 6 to 9 months. Filler posts in between get little long-tail lift.
Reddit. Fast-to-medium decay. Most citation weight lives in the first 6 to 12 months. After 18 months, a Reddit thread carries roughly 30% of its peak contribution. The decay varies by subreddit: high-activity subreddits (r/SaaS, r/marketing) decay faster than narrow ones (r/sysadmin, niche subject forums). Reddit work needs steady cadence to keep the cited surface fresh.
LinkedIn long-form. Medium decay. Most citation weight lives in the first 90 to 180 days. The decay slows as the post collects comments, reactions, and shares; engaged posts can carry citation weight into year two. The LinkedIn pattern rewards quality plus engagement maintenance (replying to comments for weeks after publication, not just the first 48 hours).
Quora. Slowest decay among major community platforms. A strong Quora answer can carry citation weight for 3 to 5 years, sometimes longer. Google indexes old Quora threads deeply, which feeds AIO and Gemini long after the answer was first written. The Quora pattern rewards depth over volume: one strong, well-formatted answer on a high-volume question outperforms ten light answers on niche questions.
Niche forums. Highly variable. Stack Overflow answers carry citation weight for years if accepted by the community (similar to Quora). Forum posts on platforms without strong indexing (Discord-style chats, ephemeral forums) decay within weeks. Check the platform's indexing posture before committing operator time.
The half-life view shapes the operator-time allocation. A program targeting Quora and Stack Overflow can build long-decay citation infrastructure with fewer posts per month. A program targeting Reddit and Hacker News requires sustained cadence to keep the fresh-post citation surface alive. Mixing platforms produces a layered citation curve: short-decay platforms drive immediate lift; long-decay platforms compound across years.
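As one way to reason about the mix, the per-platform curves above can be sketched as a simple exponential-decay model. The half-life figures below are back-solved from the rough percentages in this section, and exponential decay itself is a modeling assumption, not a measured platform fact.

```python
# Illustrative half-lives (days) back-solved from this chapter's figures.
# Exponential decay is an assumption; real curves vary by subreddit and query.
HALF_LIFE_DAYS = {
    "hacker_news": 45,    # ~25% of peak weight by day 90
    "reddit": 310,        # ~30% of peak weight at 18 months
    "linkedin": 120,      # most weight in the first 90 to 180 days
    "quora": 730,         # 3-to-5-year tail
}

def citation_weight(platform: str, age_days: int) -> float:
    """Fraction of peak citation weight a post retains at a given age."""
    return 0.5 ** (age_days / HALF_LIFE_DAYS[platform])

def layered_curve(posts: list[tuple[str, int]]) -> float:
    """Total remaining citation weight across a mixed-platform portfolio.

    posts: list of (platform, age_days) pairs.
    """
    return sum(citation_weight(platform, age) for platform, age in posts)
```

Plugging in a portfolio of fresh Reddit posts and year-old Quora answers makes the layering visible: the Reddit entries dominate early, the Quora entries dominate the tail.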
The 90/10 Rule
The community rule is firm: 90% value, 10% brand mention. Operators who flip the ratio earn no citation share. They often get banned. Operators who hold the ratio build entity-level recognition. That recognition adds up across the topic.
Value work looks like this. Answer questions in the operator's field. Add real comments on others' posts. Share useful resources, no matter who made them. Correct wrong claims when the operator has the standing to do so. Tell stories from real work, without naming the product. The pattern earns karma, followers, and respect without naming the brand at all.
Brand mentions live in the 10% slot. The mention works when the question really calls for the brand's product or service. The mention is honest about who the operator works for ("I work at Searchbloom"). The mention fits the talk, not a sales pitch ("we built this because the same problem hit our partners; here is the link if useful"). The mention skips the sales script.
Operators who track the ratio hit 90/10. They use a simple sheet of monthly posts tagged as value or brand. Operators who do not track drift to 70/30 or worse within three months. The citation curve goes flat.
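The tracking sheet can be as simple as a tagged list. A minimal sketch, assuming each post for the month is logged as either "value" or "brand" (the tag names and function names are illustrative):

```python
def brand_mention_ratio(posts: list[str]) -> float:
    """Share of the month's logged posts tagged 'brand'."""
    if not posts:
        return 0.0
    return posts.count("brand") / len(posts)

def holds_90_10(posts: list[str], limit: float = 0.10) -> bool:
    """True if the operator is holding the 90/10 line for the month."""
    return brand_mention_ratio(posts) <= limit

# Example month: 18 value posts, 2 brand mentions -> ratio 0.10, in bounds.
month = ["value"] * 18 + ["brand"] * 2
```

A month that drifts to 70/30 fails the check, which is the early warning the untracked operators never get.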
Karma and Reputation Thresholds
Each platform has a gate. Below the gate, the platform and the community treat brand mentions as low-trust. Above the gate, mentions earn proper weight.
Reddit. 500 karma in that subreddit before any brand mention. Cross-subreddit karma counts less. Subreddit-level karma shows real work in that community. Operators with under 500 karma who mention their brand get downvoted or shadow-banned. We have measured this line across many partner engagements.
Quora. 100 Quora Credits plus 30 strong topic answers. The volume count matters. Quora's algorithm weights answer authority by the author's topic history.
LinkedIn. Less black-and-white than karma. The same idea applies: 90 days of steady value-add posts and comments before any link-to-owned-content posts. The site's algorithm down-weights link-heavy operators who lack a steady posting habit.
Hacker News. 200 karma before any Show HN or Ask HN about the operator's product. Show HN posts from low-karma accounts get flagged. The same post from a known account lands fine.
Industry-specific forums. The signal varies but stays the same in spirit. Does the community know the operator's handle without notes? Operators who must introduce themselves are below the line. Operators others cite by name are above it.
The cost of the threshold is operator time. Reddit's 500 karma takes about 90 days at moderate work. Quora's 100 Credits takes 60 days. LinkedIn's posting line takes 90 days. The work adds up. An operator above the line in one community hits it faster in nearby ones. The posting habit and voice are already in place.
The Karma Velocity Index: A Cadence Health Diagnostic
Karma thresholds tell you when an operator is above the line. They do not tell you whether the operator is still active. An operator who crossed 500 karma in their main subreddit eighteen months ago, then went quiet, looks authoritative to the platform's gating logic but registers as inactive to the community and to the algorithm that weights brand-handle recency. The Karma Velocity Index is a Searchbloom-coined diagnostic that produces a single number for cadence health.
KVI = (karma earned in the last 90 days) / (operator's months active in that platform)
The output is karma-per-month, weighted toward recent activity. Reading bands by platform:
- Reddit, KVI above 50. Healthy cadence. The operator is posting often enough that the community reads them as active. Brand mentions in the 10% slot carry full weight.
- Reddit, KVI 20 to 50. Maintenance cadence. The operator's presence holds but is not growing. Acceptable for mature programs after the first 12 months.
- Reddit, KVI below 20. Decay cadence. The operator is fading. Brand mentions from the account carry reduced weight even with the karma history.
- Quora KVI bands use Credits earned rather than karma: above 15 Credits per month is healthy, 5 to 15 is maintenance, below 5 is decay.
- Hacker News KVI bands use karma earned: above 30 per month is healthy, 10 to 30 is maintenance, below 10 is decay.
The KVI is the single sign that tells operators and program managers whether the cadence is producing real engagement or whether the operator is coasting on past karma. Track it monthly alongside citation-share metrics. Operators with declining KVI usually show declining citation share 60 to 90 days later. The KVI is a leading sign. Citation share is the lagging one.
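The KVI formula and its reading bands reduce to a few lines of code. This is a sketch of the chapter's definition; the band cutoffs are the ones listed above, and the function names are illustrative.

```python
# (healthy floor, maintenance floor) per platform, from the bands above.
# The recent-90-day metric is karma on Reddit and HN, Credits on Quora.
KVI_BANDS = {
    "reddit": (50, 20),
    "quora": (15, 5),
    "hacker_news": (30, 10),
}

def kvi(recent_90d: float, months_active: float) -> float:
    """Karma Velocity Index: recent karma (or Credits) per month of tenure."""
    return recent_90d / months_active

def kvi_band(platform: str, recent_90d: float, months_active: float) -> str:
    """Classify the operator's cadence as healthy, maintenance, or decay."""
    healthy, maintenance = KVI_BANDS[platform]
    score = kvi(recent_90d, months_active)
    if score > healthy:
        return "healthy"
    if score >= maintenance:
        return "maintenance"
    return "decay"
```

An operator ten months into Reddit with 600 karma earned in the last 90 days scores 60 and reads healthy; the same recent karma on a three-year-old account reads maintenance, which is the recency weighting doing its job.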
The Operator-Led Participation Pattern
Community work succeeds when a named operator leads it. Brand accounts produce less lift than named operators in every measured engagement. The reason is the entity-recognition signal AI systems weight. A named operator who posts often on a topic becomes a citable entity. The brand gains from the operator's known authority.
The plan we suggest is one named expert per community. A second expert can join in year two once the first is set. More than one expert on the same site splits the entity signal. The site treats them as separate operators, not as the brand's voice.
The operator's job looks like this. Spend 2 to 5 hours a week on real work. Post on a steady beat, not in bursts. Reply to comments on their own posts within 24 hours. Post on others' threads more often than their own. The pattern hits steady-state recognition at 6 to 9 months. Real AI citation lift starts at 3 to 6 months.
Support comes from a content coordinator. The coordinator watches the target communities. They flag good post chances. They draft first-pass replies for the operator to edit. The operator does the posting. The coordinator does the search and the draft. The split scales the operator's reach without losing the real voice that makes the work succeed.
Sentiment Management on Negative Threads
Negative talk hits every brand with real presence. The brand's reply pattern decides if the thread turns into a long-term citation surface against the brand. Or it gets fixed clean.
The pattern is clear. Reply within 48 hours under the real brand handle or named-operator handle. Name the exact complaint in plain words, not in marketing-speak. Offer to take it offline if it fits. Post a public follow-up when it is fixed, with what changed. AI systems and community readers read brand silence as proof of the claim.
One case is different. A thread where the brand is named in a clearly false context. Then a direct reply with proof is the right move. Emotion is not. Reply with sources. Cite public facts. Offer to talk more if the poster wants. Most mods can tell a good-faith fix. They treat it well.
The Negative Thread Triage Framework
The default "reply within 48 hours" rule works for most negative threads. Some threads need a different response pattern. A 4-quadrant triage frame lets the brand match response style to thread shape. The two axes: severity (factual claim vs opinion) and reach (small thread vs viral).
- Small thread, factual claim. Direct correction. Cite the public source that disproves the claim or supplies the missing context. Keep the response under three paragraphs. Resolve in public if possible. Most factual disputes in small threads end within 48 to 72 hours when the brand responds with sources and stays clean of tone.
- Small thread, opinion. Empathetic acknowledgment. The poster's opinion is valid even if the brand disagrees. The reply acknowledges the experience, offers context if relevant, and offers an offline path if the underlying issue has a fix. Do not argue. Opinions rarely change in public threads. What changes is the watching audience's read of the brand's reply pattern.
- Viral thread, factual claim. Brand-handle correction plus named-operator follow-up. The brand-handle reply happens fast (within 24 hours) and addresses the factual specifics with sources. The named-operator reply happens 24 to 48 hours later, with operator-voice context and personal accountability. The two-layer pattern produces stronger correction signal than either alone. Communities and AI systems both weight the named-operator reply as more credible.
- Viral thread, opinion. Transparent acknowledgment plus visible commitment to improvement. The brand cannot rebut a viral opinion through argument. The right move is acknowledging the underlying experience pattern, committing to a specific improvement with a timeline, and posting a public follow-up when the improvement lands. The pattern works because the watching audience grades the response, not the original poster.
The triage frame replaces a one-size response with a four-pattern response. Brands using the frame measure their negative-thread engagement at a rate two to three times the unwatched-pattern baseline. AI citation share on the contested category queries usually returns to pre-thread baseline within 90 to 180 days. Brands that respond with the wrong pattern (arguing with opinion in viral threads, brushing off factual claims as opinion in small threads) often see citation share decline persist for 12 to 18 months. The response pattern matters more than the response speed.
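The four quadrants map cleanly to a lookup. A minimal sketch of the triage frame, with each response pattern condensed to one line (the axis labels and wording are condensed from this section):

```python
def triage(claim_type: str, reach: str) -> str:
    """Map a negative thread to its response pattern.

    claim_type: 'factual' or 'opinion'; reach: 'small' or 'viral'.
    """
    patterns = {
        ("factual", "small"): "direct correction with sources, under three paragraphs",
        ("opinion", "small"): "empathetic acknowledgment, offline path if fixable",
        ("factual", "viral"): "brand-handle correction plus named-operator follow-up",
        ("opinion", "viral"): "transparent acknowledgment plus improvement commitment",
    }
    return patterns[(claim_type, reach)]
```

The value of writing it down this way is that it forces the classification step before the reply, which is exactly where the wrong-pattern responses go astray.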
Measure the work two ways. Track sentiment trend on community surfaces over time. Use Alertmouse, Ahrefs Brand Radar, or a peer tool. Brands with active sentiment work see negative sentiment as a share of brand mentions drop over 12 to 18 months. Brands that do nothing see negative sentiment add up. The model treats unanswered threads as true.
Worked Examples
An HR-tech SaaS founder gave 5 hours a week to Reddit across r/humanresources, r/PeopleManagement, and r/recruiting. Start point: 0 karma in any of the three subreddits.
Pattern. 3 to 4 strong answers a week. They focused on real-work questions where the founder had direct skill. No brand mention in the first 90 days. Total karma crossed 500 across the three subreddits by month 4.
Brand-mention phase. From month 5 on, the founder used brand mentions in the 10% slot. Only when the question really called for the product as the answer. Mentions were honest about affiliation. They fit the talk, not a sales pitch.
Outcome at month 18. The community knew the founder's handle by name in nearby subreddits. AI citation share for HR-tech founder queries grew from below the line to 23% on ChatGPT and 31% on Perplexity. Three industry articles cited the founder. The writers found him through Reddit threads. Total cost: founder time plus a part-time coordinator at about $1,800 a month.
A small consulting firm. The CEO agreed to publish on LinkedIn three times a week. Start point: 4,200 followers, off-and-on posts, no measured citation share on Copilot or Claude.
Pattern. Monday: long-form post on a partner-work note. Wednesday: short post on a topic-relevant news item. Friday: long-form post on a method question with operator view. Plus 5 to 10 strong comments a week on others' posts. The CEO held this beat for 12 months.
Outcome at month 12. Follower count grew from 4,200 to 14,800. Three long-form posts crossed 100,000 views each. One went on to an industry magazine. That produced more citation lift. AI citation share on consulting-evaluation queries grew on Copilot. (LinkedIn feeds Copilot most.) Five third-party articles cited the CEO. The writers found him through LinkedIn.
A direct-to-consumer brand had mixed Trustpilot reviews. Several active subreddits had buyers talking about the product. Start point: 3.4 TrustScore on Trustpilot. Three Reddit threads with mostly negative tone.
Pattern. Brand-handle reply on every new negative Trustpilot review within 48 hours. Brand-handle reply on negative Reddit threads with a clear nod to the complaint plus an offline-fix offer. Named-operator work in the right subreddits. Built the 500-karma line over 90 days before any brand mention.
Outcome at month 9. Trustpilot TrustScore moved from 3.4 to 4.2. The reply motion lifted buyer trust. Reddit sentiment moved from mostly negative to mixed. Brand replies now sat next to the first complaints. AI citation share for consumer brand queries lifted. The model pulled the brand-side fixes alongside the first negative threads.
A developer-tools founder built an open-source observability product. He set up a Hacker News program. Start point: 89 karma on a long-dormant account. No Show HN history. No community recognition. The category lived mostly on HN. (Some talk also happened in r/devops and r/sre.) The founder had read HN for years. He had never posted much.
Pattern. 4 to 6 hours a week of real work on HN front-page threads. He held the cadence for 6 months before any Show HN. The work broke into three parts. First, thoughtful technical comments on others' Show HN launches. He focused on design questions, real-world tradeoffs, and useful feedback based on production work. Second, real comment threads on trending posts in nearby topics. He covered distributed systems, observability tools, and incident response. He shared real-work stories. He asked questions that brought hidden facts to light. Third, sometimes he posted high-signal third-party content. He shared research papers, post-mortems, and technical deep-dives he found useful. He never linked to his own work.
Karma threshold crossing. Karma hit 300 by month 4 and 800 by month 8. The curve was not flat. The first 100 points took two months. He was still learning the site's voice. The next 700 came over the next six months. The cadence and style locked in. By month 8 other users cited him by username in HN threads ("as @username pointed out last week"). That is the sign the community treats the operator as a known voice, not a newcomer with a goal.
First Show HN launch. Month 9. The Show HN went live with the right prep. A working demo URL, no signup wall. A real write-up of the technical picks and the work problem the tool fixed. A clear list of limits and known bugs. A vow to answer every comment in the thread for 48 hours. The founder posted on a Tuesday at 8 a.m. Eastern. That is the working HN window for technical launches. He stayed online all day to reply in real time. The post hit the front page in 90 minutes. It stayed in the top 30 for most of the day.
Outcome at month 12. The community knew the founder's username across HN technical talk. He showed up more often in nearby subreddits where HN users cross-post. Later Show HN posts (a v1.1 launch in month 11) ran 3 to 5 times the typical Show HN traction for the category. The measure was upvote pace and comment depth in the first six hours. AI citation share for the technical category queries lifted on Perplexity. (Perplexity weights HN heavily in technical retrieval.) The founder's username and product name began to appear together in Perplexity answers to observability queries by month 12. Total cost: founder time plus a developer-relations coordinator at about $2,400 a month. The coordinator found post chances. He drafted comment outlines for the founder to expand.
Honest caveat. Hacker News works for technical brands with real production work behind them. The operator must hold their own in tough comment threads. The pattern does not work for consumer brands. (The crowd is the wrong one.) It does not work for B2B services either. (HN does not trust agency-flavored content.) Operators thinking about HN should be honest. Does the founder or technical lead have the depth to keep front-page-quality posts going? Work without that depth gets flagged by the community. It hurts the brand more than silence would.
Migrating an Outsourced Community Program to Operator-Led
Most brands that pick community work as a priority already have some outsourced setup in place. They use agency-run Reddit accounts. Or they have a contractor write Quora answers. Or a junior team member runs LinkedIn under a generic brand voice. The move from that outsourced setup to real operator-led work is one of the most delicate sub-projects in the Mentions pillar. Most moves fail. Brands miss how big the gap is.
The first reason outsourced programs fail is simple. Community surfaces spot fakes faster than any other channel. Reddit's mod teams keep informal lists of sock-puppet patterns. Account birth dates cluster across related handles. Posting beats ignore real time zones. Word choice patterns repeat across handles. Brand mentions cluster in subreddits that should be unrelated. Quora's algorithm hides contractor-pattern answers on the same kinds of signals. LinkedIn's algorithm down-weights accounts with agency-style work patterns. The tools are not perfect. But they are good enough that a long outsourced run leaves clear fingerprints. AI systems pull from that data later.
The second reason is the entity-recognition gap. AI systems weight named-operator entities heavily. The named voice in an outsourced program keeps shifting. Contractors rotate. Agencies swap staff. Brands switch vendors. The result is split authority across many low-known handles. There is no one known voice. Chapter 10's entity-coherence work needs a stable named voice. Outsourced programs cannot give that.
The move is hard. The outsourced handles have karma. The brand cannot honestly inherit it. Handing an outsourced Reddit account to a founder is itself a fake the community can spot. The writing voice shifts. The posting beat changes. The topic mix moves. New operator handles start at zero karma, zero followers, zero recognition. The brand has a choice. Keep a program that quietly hurts citation share. Or start from a base that will take 6 to 12 months to show lift. The right answer is almost always to move. But the path matters.
The working pattern is a two-track move over 6 to 9 months. Track one keeps the outsourced monitoring active. The brand does not go dark on the platforms while the operator builds presence. Monitoring covers inbox watch for direct brand mentions, sentiment trend, and a feed of post chances for the operator. Track two runs the operator-led build in parallel. The operator opens new handles on each target platform. They follow the 90/10 rule from day one. They invest in karma. They post under their own name. They disclose the brand link when it matters. The two tracks overlap for the full move window. Posting under the outsourced handles tapers as the operator's karma crosses each platform's line. By month 6 to 9, the operator is the main voice across all target communities. The outsourced posting has stopped.
What to do with the outsourced handles after the move matters. Three options. First, retroactively disclose the brand link in the account profile or bio. Some platforms accept this. Reddit does not. Second, repurpose the handles as customer-success channels with clear brand attribution. This is fine on Trustpilot, Quora business profiles, and LinkedIn brand accounts. It is risky on Reddit. Third, retire the handles. Stop activity and let them go dormant. The retire option is the safest. Searchbloom suggests it in most engagements. The other options carry tail risk. The first hidden run (or how it looks) can show up later via community researchers or rivals.
The brand-handle question is its own item. Treat it apart. Reddit allows official brand accounts. They have their place on sites where the crowd expects a brand voice (Trustpilot replies, LinkedIn brand pages, Quora Spaces). On Reddit, the brand handle works for sentiment replies and FAQ-style answers. It rarely earns citation share on its own. Named operators produce the bulk of the lift. The mix that wins in measured work: one named operator per main community, plus a brand handle for defensive sentiment work and direct-reply tasks. If the brand handle was misused before (sock-puppet upvotes, fake reviews, content theft), it carries debt. The new operator should not adopt it. The brand handle then needs its own karma rebuild. Or it should retire.
One worked example shows the pattern. A marketing-automation brand learned from an Ahrefs Brand Radar audit that their three-year agency-run Reddit work had built up negative sentiment. The damage landed in r/marketing, r/smallbusiness, and r/SaaS. The agency had posted under four contractor handles. They pushed brand mentions. The brand-mention rate was closer to 40/60 than 10/90. The CMO took on a 9-month move. Months 1 to 3: the CMO opened a personal Reddit handle. She disclosed her role in her profile. She gave 4 hours a week to value-only posts across the three target subreddits. Start karma: 0. The agency kept monitoring but cut posts to one per week per handle. Months 4 to 6: the CMO crossed 500 karma in r/marketing by month 5 and in r/SaaS by month 6. The agency moved to monitoring only. Months 7 to 9: the CMO was the main voice across all three subreddits. Brand mentions sat only in the 10% slot. Only when the question really called for the brand's product. The contractor handles went dormant. By month 12 (three months past the end of the move), AI citation share for marketing-automation operator queries on Perplexity moved from a negative baseline to a positive one. Two industry magazines cited the CMO by name. The writers found her through Reddit. Total cost over the move: about $35,000 in CMO time at loaded rate, plus the agency retainer at reduced scope.
Three signs show whether the move is working. Karma growth pace on the new operator handles is the lead sign. An operator on track hits 100 karma in month 1, 300 by month 3, and 500 by month 6 in their main target community. The comment-to-post ratio is the second. A healthy operator runs 4 to 8 comments for every original post. That mirrors the value-first focus. Sock-puppet flags trending down is the third. If the brand had mod warnings, shadow-bans, or call-outs, those should drop to zero in the move window. If the signs flatten or the flags keep coming, the move has stalled. The brand needs to audit. Is the operator really posting? Or has the agency quietly resumed?
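The three signs can run as a monthly health check. The milestone and ratio figures below are the ones in this section; the check structure, field names, and the strictness of the ratio band are illustrative assumptions.

```python
# Karma expected by the end of each month, per this section's milestones.
KARMA_MILESTONES = {1: 100, 3: 300, 6: 500}

def migration_health(month: int, karma: int, comments: int, posts: int,
                     flags_this_period: int) -> list[str]:
    """Return the list of failed checks; an empty list means on track."""
    issues = []
    due = [k for m, k in KARMA_MILESTONES.items() if month >= m]
    if due and karma < max(due):
        issues.append("karma pace behind milestone")
    ratio = comments / posts if posts else float("inf")
    if not (4 <= ratio <= 8):
        issues.append("comment-to-post ratio outside 4-8 band")
    if flags_this_period > 0:
        issues.append("sock-puppet flags still occurring")
    return issues
```

A month-3 report of 350 karma, 30 comments against 5 posts, and zero flags passes clean; a month-6 report of 400 karma, a 2:1 comment ratio, and a fresh mod warning fails all three checks and triggers the audit.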
One last point. Moves from outsourced to operator-led work often bring up team pushback. The agency or contractor that built the outsourced setup has money on the line. They argue against the move. The marketing team that owned the program may feel called out for the past path. The operator who agrees to lead has to give several hours of personal time a week. The role may not have planned for that. The choice to move is a CEO or CMO call. The program math, the brand reputation, and the operator's time all sit above the working level. Treating it as an operational handoff, not a strategic call, is one of the more common ways moves stall.
The handoff between the operator and the coordinator needs care. The operator owns the voice. They pick the topics. They make the post calls. They reply to every comment on their own posts. The coordinator owns monitoring. They flag which threads need attention. They spot new questions in the operator's field. They draft outlines (a paragraph-level skeleton the operator turns into real voice). They track signs (karma curves, citation share, sentiment trend, tracked weekly). They handle platform-specific tasks (format, link hygiene, posting windows). The split lets the operator spend most hours on the high-value work. The coordinator does the search and tracking. The coordinator role runs $1,800 to $2,800 a month at part-time hours for a single-operator program. Bigger programs with two or three operators need a bigger coordinator role.
A related move question is whether to admit the move in public. Three options. Silent move: the operator opens new handles. The outsourced handles go quiet. No comment. Soft note: the operator's bio mentions their role and the platform context. (LinkedIn-style. It fits when the platform expects this disclosure.) Open word: a public post or article tells the story of the move and why. Open word carries some short-term cost. The brand admits the past path was not the best. But it earns long-term credit with communities that value real voices. Searchbloom has seen silent and open work both succeed. The pick depends on the past damage. If the silence would read as ducking, go open. For brands with no public incidents, silent is the default. For brands with public pushback, open word cuts the tail risk of a rival surfacing the past later.
Moves also need a plan for the first public pushback after launch. The common cases. A regular in the subreddit calls out a past pattern from the outsourced era in a thread. A mod messages the operator in private about the link between the new handle and the older agency accounts. A rival points to the move as proof of bad faith. The right reply in all three cases is the same. Acknowledge it. Use plain words about what changed. Commit to show the change over time through actions, not arguments. Operators who deflect or relitigate the past add to the damage. Operators who reply clean and keep up the steady work get their community standing back in 60 to 120 days. The pattern echoes the sentiment work earlier in this chapter. Real talk beats the other path, even when the truth is not pretty.
Platform-Specific Considerations
Each AI system weights community sites in its own way. The high-leverage pick depends on which one you most need to reach.
- ChatGPT. Reddit-heavy. It pulls hard from Hacker News and niche forums too. Profound's October 2025 data showed the Reddit rate runs from 1% to 14% across queries. The range tracks the model's retrieval window in any one session.
- Claude. Weights LinkedIn long-form and named-expert Quora answers heavily. Less Reddit pull than ChatGPT. More choosy on what it cites.
- Perplexity. The most community-heavy AI. 6.3% Reddit rate. Plus strong pulls from Hacker News, Quora, and niche forums. Brands shrink in Perplexity if they skip the community layer.
- Google AI Overviews. 2.3% Reddit rate plus heavy Quora pull. Google indexes Quora threads deeply. Brands with strong Quora author work over-index in AIO for evergreen queries.
- Gemini. Like AIO. They share retrieval. The same Quora and Reddit patterns apply.
- Microsoft Copilot. LinkedIn-driven. Microsoft owns LinkedIn and pipes it in direct. Brands chasing Copilot need strong LinkedIn work above all else.
Industry Variants
Ben Wills's March 2026 study of 145 fields showed which platform wins in each.
- Reddit-heavy fields (B2B SaaS, consumer tech, gaming, finance). Reddit is the main bet. LinkedIn is second. Operator-led work brings real, measured lift.
- LinkedIn-heavy fields (B2B services, pro services, enterprise software, HR tech). LinkedIn long-form is the main bet. Reddit is second. The named-operator posting beat drives the lift.
- Quora-heavy fields (consumer products, personal finance, education, health). Quora's evergreen indexing builds AIO and Gemini share for years. Author-handle Quora work is the lever.
- Niche-forum fields (developer tools, design tools, indie SaaS). Hacker News, Indie Hackers, GitHub Discussions, and Designer Hangout beat the big general sites. Operator presence in niche forums earns category lift.
- Low-community fields (heavily regulated fields, niche enterprise lines). Community work has less lift. Budget moves better to Pay-to-Play (Chapter 1) and Third-Party Corroboration (Chapter 3).
The Cross-Platform Citation Stacking Effect
Operators with presence on multiple community platforms get cited at rates above the linear sum of their per-platform presence. The reason is entity coherence. AI systems weight a cited entity more heavily when the entity is corroborated across multiple surfaces. The same named operator showing up on Reddit, LinkedIn, and Quora reads as a real category authority. The same name on a single platform reads as a single-platform expert.
The math, measured across mid-market B2B engagements, sits at:
- 1 platform. Baseline citation share.
- 2 platforms. Roughly 1.6x baseline.
- 3 platforms. Roughly 2.5x baseline.
- 4 platforms. Roughly 2.8x baseline.
- 5 or more platforms. Diminishing returns. The marginal lift drops below 0.2x per added platform in most categories.
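The multipliers above can be sketched as a lookup. The figures are the section's own rough numbers, and the 0.2x per added platform is used here as an upper-bound assumption, since the text says the marginal lift drops below it.

```python
# Sketch of the cross-platform stacking multipliers listed above.
# Per-count figures are the section's rough numbers, not a formula.

def stacking_multiplier(platforms: int) -> float:
    base = {1: 1.0, 2: 1.6, 3: 2.5, 4: 2.8}
    if platforms <= 0:
        return 0.0
    if platforms <= 4:
        return base[platforms]
    # Beyond four platforms: at most 0.2x per added platform
    # (the text says the true marginal lift falls below this).
    return base[4] + 0.2 * (platforms - 4)

def expected_citation_share(baseline_share: float, platforms: int) -> float:
    return baseline_share * stacking_multiplier(platforms)

# Example: a 2% baseline citation share on one platform, scaled to three.
print(round(expected_citation_share(0.02, 3), 4))
# → 0.05
```

The lookup makes the diminishing-returns shape easy to see: the jump from two to three platforms is worth more than the jump from three to four.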
The stacking effect compounds with the half-life view earlier in this chapter. An operator running Quora plus LinkedIn (slow-decay plus medium-decay surfaces) earns citation share that compounds across two years. The same operator running Hacker News plus Reddit (fast-decay plus medium-decay surfaces) earns short-term lift but requires sustained cadence to hold it. Mixing decay profiles produces the smoothest citation curve.
The stacking effect also explains why moving from outsourced to operator-led work earns disproportionate lift. Outsourced programs typically spread thinly across many platforms with low-recognition handles. The entity coherence is zero or negative. AI systems cannot tie the surfaces to a single entity. An operator-led program with three platforms and one named voice often outperforms an outsourced program with seven platforms and twelve handles, even on raw post volume.
Common Mistakes That Defeat Community Work
1. Flipping the 90/10 ratio. The most common failure. The brand pushes sales posts. The community and the platform both punish it. Test: across the last 30 posts, what share were brand-tied?
2. Outsourcing the posts. Agency posting under a fake operator handle gets spotted fast. The trust loss is hard to undo. Test: is the named operator writing and posting every entry under their handle?
3. Skipping the karma line. Brand mentions from accounts under the line make a minus, not a plus. The community downvotes the post. The site flags the account. Test: does the operator have at least 500 subreddit-specific karma before any brand mention?
4. Bursts rather than steady work. Twenty posts in a month and then silence has little lift. Two posts a week for twelve months earns real entity recognition. Test: what is the standard deviation of the operator's weekly post count over the last 12 weeks?
5. Skipping negative threads. Brand silence on negative talk builds up against the brand. AI systems pull the unanswered claims as true. Test: what is the time-to-reply on the last ten negative threads?
6. Many operators on the same site. Splitting the entity signal across two or three brand-side operators in the same subreddit hurts recognition. One named operator per community wins. Test: does the brand have one named operator per main community?
7. Treating community work as a project, not a program. Programs that run for 90 days and stop have little lift. The recognition curve takes 6 to 12 months. Programs ending before that waste the early work. Test: is there a 24-month plan for steady operator work in each target community?
8. Brand-handle posts where named-operator posts are the lever. The brand handle on Reddit gets 30 to 50% of the lift a named operator gets. On LinkedIn, the gap is bigger. Brand-page posts run an order of magnitude below CEO posts on reach. Test: is the named operator the main posting handle, with the brand handle as backup?
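Two of the tests above are numeric and easy to automate: the 90/10 share check over the last 30 posts, and the standard deviation of weekly post counts. A minimal sketch; the 10% cap and the stdev threshold of 1.0 are assumptions, not platform rules.

```python
# Sketch of two audit tests from the list above: brand-tied share
# over the last 30 posts, and weekly-cadence consistency. Thresholds
# are assumptions for illustration.
from statistics import pstdev

def brand_share_ok(brand_tied_in_last_30: int) -> bool:
    """90/10 rule: at most ~10% of the last 30 posts brand-tied."""
    return brand_tied_in_last_30 / 30 <= 0.10

def cadence_steady(weekly_posts: list[int], max_stdev: float = 1.0) -> bool:
    """Low stdev of weekly post counts signals steady work, not bursts."""
    return pstdev(weekly_posts) <= max_stdev

steady = [2, 2, 3, 2, 2, 2, 3, 2, 2, 2, 3, 2]   # two-ish posts a week
bursty = [10, 8, 0, 0, 0, 0, 6, 0, 0, 0, 0, 0]  # bursts then silence
print(brand_share_ok(3), cadence_steady(steady), cadence_steady(bursty))
# → True True False
```

The bursty schedule fails the check even though its total post count is higher, which is the point of mistake 4.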
Questions & Answers
Why does community presence matter for AI Search? AI systems pull from main-stage community talk. Reddit rates measured by Profound: 1.2% ChatGPT, 2.3% AIO, 6.3% Perplexity. ChatGPT's Reddit pull runs from 1% to 14%. Community is the cite source review pages cannot give.
What is the 90/10 rule? Ninety percent value, ten percent brand mention. The pattern that earns trust and cite share. Firm across Reddit, Quora, and most active community sites.
How much karma do I need before brand mentions are safe? Reddit: 500 subreddit-specific karma. Quora: 100 Credits and 30 strong topic answers. LinkedIn: 90 days of steady value posts. Niche forums: the community knows you by handle.
Which platforms produce the most AI cite lift? Reddit for ChatGPT and Perplexity. Quora for AIO and Gemini. LinkedIn for Copilot. Niche forums for category-specific lift.
Can we outsource community work? Partial outsourcing works for monitoring and drafts. Full outsourcing fails. Community sites spot fakes. The named operator must do the posts.
How do we handle negative talk? Reply within 48 hours. Name the exact issue. Offer to fix it offline. Post a public follow-up when it is fixed. Brand silence reads as proof of the claim.
What measurement signals matter? Reddit cite rate on target queries. Brand-mention sentiment trend. Operator-entity recognition. Community-driven referral traffic. Cite lift shows in 90 days. Entity recognition takes 6 to 12 months.
Is there a quick fit-check? Check if community sites show up in your field's AIO cites. If yes, community work is high-leverage. If no, budget moves to Pay-to-Play and Third-Party Corroboration.
