This is a chance for any users, admins, or developers to ask me, @nutomic@lemmy.ml, SleeplessOne, or @phiresky@lemmy.world anything they’d like about Lemmy, its future, and wider issues in today’s social media landscape.

NLnet Funding

First of all, some good news: we are currently applying for new funding from NLnet and have reached the second round. If it gets approved, @phiresky@lemmy.world and SleeplessOne will work on the paid milestones, while @dessalines and @nutomic will continue to be funded by direct user donations. This will increase the number of paid Lemmy developers to four and allow for faster development.

You can see a preliminary draft of the milestones. It gives a general idea of what the development priorities will be over the next year or so, though the exact details will almost certainly change before the application process is finalized.

Development Update

@ismailkarsli added a community statistic for the number of local subscribers.

@jmcharter added a view for denied Registration Applications.

@dullbananas made various improvements to the database code, such as batching insertions for better performance, adding SQL comments, and supporting backwards pagination.
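For readers unfamiliar with the technique: batching means collecting many rows and sending them to the database in a single statement instead of issuing one INSERT per row, which saves round trips. Below is a minimal sketch of the general idea, assuming a Diesel-style setup; the comment_like table and the insert_likes helper are hypothetical illustrations, not Lemmy’s actual schema or code.

```rust
// Sketch of a batched insert with Diesel. The table and columns here are
// illustrative only and do not match Lemmy's real schema.
use diesel::prelude::*;

diesel::table! {
    comment_like (id) {
        id -> Int4,
        comment_id -> Int4,
        person_id -> Int4,
        score -> Int2,
    }
}

#[derive(Insertable)]
#[diesel(table_name = comment_like)]
struct CommentLikeForm {
    comment_id: i32,
    person_id: i32,
    score: i16,
}

// One round trip for the whole batch instead of one INSERT per row.
fn insert_likes(conn: &mut PgConnection, forms: Vec<CommentLikeForm>) -> QueryResult<usize> {
    diesel::insert_into(comment_like::table)
        .values(&forms)
        .execute(conn)
}
```

Diesel expands the vector into a single multi-row INSERT, so the database sees one statement no matter how many rows are queued.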

@SleeplessOne1917 made a change that allows community moderators, in addition to admins, to see who voted on posts. He also improved the 2FA modal and made it more obvious when a community is locked.

@nutomic completed the implementation of local-only communities, which don’t federate and can only be seen by authenticated users. He also finished the image proxy feature, which prevents user IPs from being exposed to external servers via embedded images. Admin purges of content are now federated. Additionally, he made a change that reduces the problem of instances being marked as dead.
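To illustrate the proxying idea: instead of the client’s browser loading a remote image directly (which reveals the client’s IP to that external server), embedded image URLs are rewritten so the request goes through the local instance, which fetches the image on the client’s behalf. Here is a rough sketch of just the URL-rewriting step; the endpoint path and function name are illustrative assumptions, not Lemmy’s actual route.

```rust
// Hedged sketch of URL rewriting for an image proxy. The endpoint path below
// is illustrative; the real route may differ.
fn proxy_image_url(local_domain: &str, external_url: &str) -> String {
    // Percent-encode the external URL so it survives being a query parameter.
    let encoded: String = external_url
        .bytes()
        .map(|b| match b {
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'_' | b'.' | b'~' => {
                (b as char).to_string()
            }
            _ => format!("%{b:02X}"),
        })
        .collect();
    format!("https://{local_domain}/api/v3/image_proxy?url={encoded}")
}

fn main() {
    let proxied = proxy_image_url("lemmy.example", "https://other-site.example/pic.png");
    println!("{proxied}");
    // -> https://lemmy.example/api/v3/image_proxy?url=https%3A%2F%2Fother-site.example%2Fpic.png
}
```

The external server then only ever sees requests from the instance, never from individual readers.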

@dessalines has been adding moderation abilities to Jerboa, including bans, locks, removals, featuring posts, and vote viewing.

In other news, there will soon be a security audit of the Lemmy federation code, thanks to Radically Open Security and NLnet.

Support Development

@dessalines and @nutomic are working full-time on Lemmy to integrate community contributions, fix bugs, optimize performance and much more. This work is funded exclusively through donations.

If you like using Lemmy and want to make sure that we will always be available to work full time on it, consider donating to support its development. Recurring donations are ideal because they allow for long-term planning, but one-time donations of any amount also help us.

  • Dessalines@lemmy.ml (OP) · 9 months ago

    Registration applications and user reports are the best way to handle trolls. The first stops 90% of them; the second means we can ban them and remove all their spam at the click of a button.

    I don’t see how you could prematurely know about spammers or trolls until someone reports them. We don’t plan on adding any text-analyzing AI or anything like that into lemmy’s codebase.

    • Danterious@lemmy.dbzer0.com · 9 months ago (edited)

      I don’t see how you could prematurely know about spammers or trolls until someone reports them.

      I don’t think you can. My suggestion was more focused on how admins make decisions after a report. Right now they have to manually scan the person’s comment history, and that is the part I find inefficient. If it were possible to just show extra high-level information about the user, it might make it easier for the admin to make a decision.

      We don’t plan on adding any text-analyzing AI or anything like that into lemmy’s codebase.

      Yeah using AI to try and analyze comments would be overkill and probably prone to manipulation anyways.

      Edit: I’m sorta talking more specifically towards banning a user or seeing if what a user is doing is a repeated pattern.

      • Dessalines@lemmy.ml (OP) · 9 months ago

        I don’t think there’s a way you could avoid going into their history. I do that as an admin to verify that the account in question is indeed repeatedly breaking rules. I’m open to suggestions tho.