In November 2025, GummySearch lost its Reddit API license. This wasn't a gradual phase-out. It was immediate. The tool that dominated community research on Reddit, the one that product managers and founders relied on for pain point discovery, was gone. GummySearch announced a full sunset by November 2026, and the research community was left scrambling for alternatives.

I watched this happen in real time. I had been using GummySearch to track signal patterns across communities, and when the license was revoked, I realized something important: I was dependent on a tool I didn't control, running on infrastructure owned by someone else, with terms that could change without warning.

What Replaced It

After the GummySearch collapse, seven commercial competitors emerged. I mapped out how they compare across the critical dimensions for serious research work.

| Tool | Business Model | Platforms Supported | Open Source | Scoring Transparency |
|---|---|---|---|---|
| PainOnSocial | Tiered SaaS | Reddit | No | Proprietary |
| Reddinbox | Pay-per-query | Reddit | No | Proprietary |
| RedReach | Monthly subscription | Reddit, Twitter | No | Proprietary |
| BuzzAbout | SaaS subscription | Reddit, Twitter, Facebook, TikTok | No | Proprietary |
| Awario | Monthly subscription | Multi-platform social | No | Proprietary |
| SubredditSignals | SaaS subscription | Reddit | No | Proprietary |
| Apify Reddit Pain Finder | Monthly subscription | Reddit | No | Proprietary |

The pattern was clear: all closed-source, all paid, most Reddit-only, and none transparent about how they score or rank signals. You feed data in, a number comes out, and you have to trust it works. You can't modify the algorithm, weight the inputs differently, or understand why one signal ranked higher than another. That's a constraint I wasn't willing to accept.

The Gap I Saw

I needed a tool that did several things at once. First, it had to cover multiple communities. Reddit data is valuable, but power users are everywhere: Hacker News, GitHub Discussions, Product Hunt, Indie Hackers. If you only search Reddit, you miss critical audience segments. Second, it had to be transparent about scoring. I wanted to see the formula, adjust the weights, and run custom analysis. Third, it had to be mine to run, not rented from a vendor. No API key revocation fears. No surprise pivot. No sunset clause.

No existing tool checked all three boxes. The open source alternatives were either educational projects or basic scrapers with no signal detection logic. The commercial tools handled signal detection but locked the methodology away and limited platform coverage.

So I built one.

What ScopeScrape Is (And Isn't)

ScopeScrape is an early-stage open source CLI tool. I'm shipping it under the MIT license because this space deserves better than vendor lock-in, and the research community benefits when tools are transparent and self-hosted.

What it is: a Python command-line tool that searches multiple community platforms for signal phrases, ranks them by configurable scoring rules, and exports structured data for analysis. It currently supports Reddit and Hacker News. Version 0.2 will add GitHub Discussions and Product Hunt. The signal detection uses VADER sentiment analysis and phrase matching across four severity tiers. Scoring is fully configurable; you choose how to weight frequency, intensity, specificity, and recency.
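To make "fully configurable scoring" concrete, here is a minimal sketch of how a weighted signal score can be computed from the four components mentioned above. The function and weight names are illustrative, not ScopeScrape's actual API; the assumption is only that each component has been normalized to the 0–1 range before weighting.

```python
# Illustrative scoring sketch (names hypothetical, not ScopeScrape's real API).
# Each component is assumed pre-normalized to [0, 1]; the weights are the
# user-adjustable part, so changing them re-ranks signals transparently.

DEFAULT_WEIGHTS = {
    "frequency": 0.3,    # how often the phrase appears
    "intensity": 0.3,    # sentiment strength (e.g. from VADER)
    "specificity": 0.2,  # how concrete the complaint is
    "recency": 0.2,      # how recent the posts are
}

def score_signal(components: dict, weights: dict = DEFAULT_WEIGHTS) -> float:
    """Weighted average of normalized signal components."""
    total_weight = sum(weights.values())
    weighted_sum = sum(weights[k] * components.get(k, 0.0) for k in weights)
    return weighted_sum / total_weight
```

Because the formula is just a weighted average, a researcher can, say, double the recency weight for trend-hunting or zero it out for evergreen pain points, and the ranking changes in a way they can fully explain.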

What it isn't: a polished product. This is not a startup. It's not a SaaS platform. There's no pricing page, no enterprise tier, no onboarding wizard. It's a tool, built in public, to solve a real problem. It's v0.1. The code is rough in places. Documentation is minimal. Platform coverage will expand as time permits.

It's honest about what it is, and I think that's more useful than a false sense of maturity.

Technical Decisions Worth Noting

A few implementation choices shaped the project. I used PRAW for Reddit instead of raw HTTP calls because the Reddit API requires careful session handling and PRAW abstracts away the boilerplate. For Hacker News, I used the Algolia search API rather than scraping the site directly; it's faster, it's officially supported, and it avoids infrastructure load. For sentiment analysis, I chose VADER over TextBlob because it's trained on social media text and handles informal language better. For the CLI interface, I picked Click over argparse because it's more composable for subcommands and cleaner for export format handling.
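As an illustration of the Hacker News choice: the Algolia HN API exposes a public full-text search endpoint, so finding signal phrases is a single GET request rather than a scrape. The sketch below only builds the request URL (the helper name is mine, not from the ScopeScrape codebase); `query`, `tags`, and `hitsPerPage` are real parameters of that endpoint.

```python
# Sketch of querying Hacker News via the official Algolia search API.
# Function name is illustrative; the endpoint and parameters are Algolia's.
from urllib.parse import urlencode

ALGOLIA_HN_SEARCH = "https://hn.algolia.com/api/v1/search"

def build_hn_query(phrase: str, tags: str = "story", hits: int = 50) -> str:
    """Build a full-text search URL for a signal phrase on Hacker News."""
    params = {"query": phrase, "tags": tags, "hitsPerPage": hits}
    return f"{ALGOLIA_HN_SEARCH}?{urlencode(params)}"
```

Fetching that URL with any HTTP client returns JSON with a `hits` array, which is what makes this route both faster and friendlier to HN's infrastructure than scraping rendered pages.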

None of these are novel choices. They're pragmatic ones, chosen based on the constraints of the problem and the tools available. They work, they're well-maintained, and they don't add unnecessary dependencies.

Where This Goes From Here

ScopeScrape is on GitHub at v0.1. The docs, blog, and codebase are public at scopescrape.earnedconviction.com. I'm building this in the open because real feedback beats assumptions, and the indie hacker community has made that the standard for serious projects.

If you're interested in the technical implementation, there are deeper posts on this blog about signal detection and scoring logic. If you want to try it, the installation and quickstart are in the docs. If you want to contribute, fork it and open a PR. The barrier to entry is intentionally low because this is meant to be community-owned from the start.

GummySearch's sunset taught me that tools we depend on need to be tools we can own. ScopeScrape is my answer to that lesson.