What’s the point of paying bots to join Lemmy instances?

      • fubo@lemmy.world · 24 upvotes · edited · 1 year ago

        Bad guys have noticed that there’s a resource here that could become valuable later: the ability to inject spam into Lemmy. Maybe not very valuable yet, but after a few more weeks/months of growth, expect it.

        So they’re acquiring the accounts needed to do that.

        These may be commercial spammers. If they’re not posting any spam yet, that may be because nobody’s paying them to do so yet. Commercial spammers don’t spam for free.

        They may be “black hats” (for-profit computer criminals) acquiring accounts to hold with the expectation of selling them or leasing them out. Their intended customers could include commercial spammers in the future; or (e.g.) terrorist or fascist groups. ISIS supporters and Trumpist-Putinists have both spammed other forums and social media sites, for example; and Republican operatives have used phone and SMS spam for voter suppression.

        Or they may be collecting accounts to use for denial-of-service or flooding attacks, to shut down Lemmy activity they don’t like. A number of political entities, including nation-states, have used similar activity to suppress or make unusable forums that they don’t like; e.g. flooding a forum with gore pictures to make it unpleasant to use or moderate.

      • FlowerTree@pawb.social · 13 upvotes · 1 year ago

        I assume we’re talking about spam bots, not bots whose entire purpose is to reupload content from elsewhere (e.g. a Reddit reposter bot).

        Most likely, spam. Spam and scams have been quite a problem on Mastodon, so I wouldn’t be surprised if bad actors want to bring them here.

    • The Dark Lord ☑️@lemmy.ca · 4 upvotes · 1 year ago

      But since they’re mostly being added to small servers where the signup security is weak, couldn’t the main servers just defederate from those small, bot-filled instances to reduce spam?

      • floofloof@lemmy.ca · 7 upvotes · 1 year ago

        I imagine this will happen, and the bots will move to larger instances where they can hide among the crowd.

  • NetHandle@kbin.social · 9 upvotes · 1 year ago

    It’s marketing, it’s propaganda, it’s psyops: influencing which posts make it to the front page, which posts stay in hot, and which opinions get upvoted or downvoted just to make them look popular or unpopular. Mass-reporting posts that offend them. Holding entirely fake, scripted conversations to present a point in a more trusted way and influence the reader.

    Remember, nobody is immune to propaganda.

  • phase_change@sh.itjust.works · 9 upvotes · 1 year ago

    Upvotes and downvotes.

    Right now, I can browse by New on my subscribed communities and see every post since the last time I did that.

    I can view or re-view posts and read every response. If the responses are legion, I can play with hot/top and get the meat of the discussion.

    Did you notice that last sentence? On the few posts with too many responses to read them all, I have to rely on sorting to get at the ones that are relevant.

    If the Lemmy community grows large enough, I’ll need to do the same for posts. I will no longer be able to regularly view by new and have time to see everything.

    So, I’ll need to rely on some sorting method to make certain I see relevant stuff.

    Someone with millions of bots that never post has millions of upvotes and downvotes with which to influence the score used by the sorting algorithm I’ll rely on to decide what to read.
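
    To make that concrete, here’s a minimal Python sketch of a Lemmy-style “hot” rank. The real formula lives in the server’s SQL and its exact constants may differ, so treat this purely as an illustration of how raw vote counts feed the sort order:

    ```python
    import math

    def hot_rank(score: int, hours_old: float) -> float:
        """Log-dampened score divided by an age penalty (approximate constants)."""
        return (10000 * math.log10(max(1, 3 + score))) / ((hours_old + 2) ** 1.8)

    organic = hot_rank(score=40, hours_old=3)    # a post with 40 genuine upvotes
    boosted = hot_rank(score=2000, hours_old=3)  # same age, plus ~2,000 bot upvotes

    print(f"organic: {organic:.0f}  boosted: {boosted:.0f}")
    # The boosted post ranks roughly twice as high, and because the score is
    # log-dampened the botnet can keep it there by drip-feeding more votes.
    ```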

  • rm_dash_r_star@lemm.ee · 6 upvotes · 1 year ago

    I wonder that too, since according to current statistics (the-federation.info/platform/73) there are over 1.2M Lemmy accounts, while the six-month active user count is a little over 40k. Now, I know many of the rest may be lurkers or people like me who have logins on other instances, but a good number of them would have to be bot accounts. So what are they lying in wait for? Could be shameless promotion and upvoting, but I’m not seeing any of that yet. Hope we’re not in for a shitstorm.

  • Daniel Jackson@lemmy.world · 6 upvotes · 1 downvote · 1 year ago

    Because there is no karma system on lemmy (thank goodness, I’m against karma), you can easily create thousands of bots which will upvote your post and bring it to the front page.

    The solution is not some custom anti-abuse system which can be gamed (stuff like “you can’t vote because of the age of your account”, …). IMHO, the solution is bot detection. Since everything is public on an instance, somebody at some point will start scraping instances to detect bot behavior and inform instance owners. It will come with maturity.
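
    As a rough illustration of what such scraping could flag, here’s a hedged Python sketch. The field names and thresholds are invented for the example (they’re not Lemmy’s actual API schema); the point is only that publicly visible profile data is enough to surface vote-only accounts for an instance owner to review:

    ```python
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Account:
        # Fields any public profile scrape could provide; names are illustrative.
        name: str
        created: datetime
        post_count: int
        comment_count: int

    def looks_like_vote_bot(acct: Account, now: datetime) -> bool:
        """Crude heuristic: weeks old, yet never posted or commented,
        so it contributes nothing to the instance except votes."""
        age_days = (now - acct.created).days
        return age_days > 30 and acct.post_count == 0 and acct.comment_count == 0

    now = datetime.now(timezone.utc)
    accounts = [
        Account("alice", datetime(2023, 6, 1, tzinfo=timezone.utc), 12, 87),
        Account("xk7f9q2", datetime(2023, 6, 2, tzinfo=timezone.utc), 0, 0),
    ]
    print([a.name for a in accounts if looks_like_vote_bot(a, now)])
    # ['xk7f9q2'] -- the kind of report an instance owner could then act on
    ```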

    • HTTP_404_NotFound@lemmyonline.com · 1 upvote · 1 year ago

      Because there is no karma system on lemmy

      Not actually accurate. There IS a karma system. I can look up your overall post karma, both positive and negative. I can look up your comment karma, separated into positive and negative.

      It’s just not exposed through the Lemmy UI currently. I will note, kbin does show who is upvoting and downvoting posts as well.
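
      For example, a hedged sketch of that lookup against the public API. The endpoint and field names follow the 0.18-era API as I remember it, so treat both as assumptions and check your instance’s version; the username is made up:

      ```python
      import requests

      def karma(instance: str, username: str) -> dict:
          """Fetch a user's aggregate post/comment scores from a Lemmy instance."""
          r = requests.get(f"https://{instance}/api/v3/user",
                           params={"username": username}, timeout=10)
          r.raise_for_status()
          counts = r.json()["person_view"]["counts"]
          return {"post_score": counts.get("post_score"),
                  "comment_score": counts.get("comment_score")}

      print(karma("lemmy.world", "some_user"))  # hypothetical account name
      ```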

      • Daniel Jackson@lemmy.world · 1 upvote · 1 year ago

        If humans exhibit bot-like behavior, it’s okay to mark them as bots. If a human is only posting to promote products or astroturf, who cares if they’re misclassified; they don’t add anything to the discourse. IMHO, that’s good riddance.

        And in my solution, the instance owner takes action in the end; it’s not like there is no human recourse.

        • itchy_lizard@feddit.it · 1 upvote · edited · 1 year ago

          The “bot-like behaviour” usually isn’t what they post but what they look like.

          If it targets their account actions and not what they look like, that’s fine. But that’s not how most anti-bot systems work: mostly they discriminate based on how you look before you’ve done anything.
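
          To spell out the distinction, a small hedged sketch with invented field names: the first check judges an account by what it looks like before it has done anything; the second judges what it has actually done.

          ```python
          def appearance_gate(via_vpn: bool, has_avatar: bool, age_days: int) -> bool:
              """Rejects accounts for how they look (the approach criticised above)."""
              return via_vpn or (not has_avatar and age_days < 7)

          def behaviour_gate(votes_cast: int, posts: int, comments: int) -> bool:
              """Flags accounts for what they do: heavy voting, zero participation."""
              return votes_cast > 200 and posts == 0 and comments == 0
          ```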