How I Automate Engagement on Reddit

swordfish11

New member
Hi everyone - yesterday I posted a link offering to make demos, and it got a good response that changed my opinion of Reddit. I had always been a lurker, but I guess I was just lurking in the wrong places. r/SaaS is awesome!

I wanted to share how I do this because I think you will appreciate my strategy. The idea is to have some kind of value exchange:
  1. I make you a demo; you give me engagement on Reddit.
  2. Then I make a few demos.
  3. See what it takes to automate as much of the process as I can.
  4. Finish the demos.
  5. Share how I did it on each platform, trying multiple messages on the platforms that allow it, until I get it right.
I worked with ChatGPT to come up with a script that outputs this Notion checklist, which I work through to respond to each comment in turn. Check this out:

https://scythe-foxglove-abc.notion.site/Reddit-Growth-27376176dcf7457785c58511719b984e?pvs=4
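
For context on what the script produces: it copies a Markdown table to the clipboard, which pastes into Notion as the checklist above. Every value below is made up just to show the shape:

```
https://www.reddit.com/r/SaaS/comments/<post_id>/

| Status | Username | Permalink | URLs | Results |
|----------|----------|-----------|------|--------|
| | some_user | [Link](https://www.reddit.com/r/SaaS/comments/<post_id>/comment/<comment_id>/) | someapp.io |  |
| | another_user | [Link](https://www.reddit.com/r/SaaS/comments/<post_id>/comment/<comment_id>/) |  |  |
```

The Status and Results columns start out empty on purpose; I fill those in as I work through each comment.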

Here's the code, and I'll help you get it up and running if you want, but it's meant mostly to illustrate my process and the way I approach problems like this. Again, I figured you all would appreciate it, and if I'm doing something dumb or wrong, let me know!!!

Code:
```
import subprocess
import re
import os
import praw
import sys


def copy_to_clipboard(text):
    # Pipe the text into the platform's native clipboard tool.
    try:
        if sys.platform == "darwin":  # macOS
            subprocess.run("pbcopy", text=True, input=text, check=True)
        elif os.name == "nt":  # Windows
            subprocess.run("clip", text=True, input=text, check=True)
        elif sys.platform.startswith("linux"):  # Linux, needs xclip installed
            subprocess.run(["xclip", "-selection", "clipboard"], text=True, input=text, check=True)
        else:
            print("Sorry, don't know how to copy to clipboard on your OS.")
    except Exception as e:
        print(f"Failed to copy text to clipboard: {e}")


def find_urls(text):
    # Loose pattern for bare domains like "example.com" - no scheme required,
    # so plain "myapp.io" mentions get caught too (at the cost of the odd false positive).
    url_pattern = r"\b(?:\w+-?)+(?:\.(?:\w+-?)+)+\b"
    return re.findall(url_pattern, text)


def main(url):
    # Read-only Reddit client; credentials come from environment variables.
    # (Reddit's API rules recommend a more descriptive user_agent string.)
    reddit = praw.Reddit(
        client_id=os.getenv("REDDIT_CLIENT_ID"), client_secret=os.getenv("REDDIT_SECRET"), user_agent="python"
    )
    submission = reddit.submission(url=url)
    # Expand "load more comments" stubs so .list() only yields real comments
    submission.comments.replace_more(limit=0)
    data = []
    for comment in submission.comments.list():
        # Skip deleted comments (author is None) and my own replies
        if comment.author is None or comment.author == "demofunjohn":
            continue
        urls = find_urls(comment.body)
        # Dedupe and normalize: lowercase, and strip any "www." prefix
        urls = list(set(item.lower().replace("www.", "") for item in urls))
        data.append((comment.author, urls, f"https://www.reddit.com{comment.permalink}"))

    # Sort the data based on the number of URLs
    sorted_data = sorted(data, key=lambda x: len(x[1]), reverse=True)

    # Store the output in a buffer
    buffer = url
    buffer += "\n\n| Status | Username | Permalink | URLs | Results |\n"
    buffer += "|----------|----------|-----------|------|--------|\n"
    for author, urls, link in sorted_data:
        buffer += f"| | {author} | [Link]({link}) | {', '.join(urls)} |  |\n"

    # Print or use the buffer as needed
    copy_to_clipboard(buffer)
    print("Copied")


if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: python script.py ")
        sys.exit(1)
    url = sys.argv[1]
    main(url)

```
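
In case you want to run it: you create a free "script" app at reddit.com/prefs/apps to get the API credentials the script reads from the environment. Roughly like this (the credential values and post URL are placeholders):

```
export REDDIT_CLIENT_ID="your-client-id"
export REDDIT_SECRET="your-client-secret"
python script.py "https://www.reddit.com/r/SaaS/comments/<post_id>/"
```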

cc: @mocoman97 @voicesheard (my friends from r/SaaS)
 