A little insane, but in a good way.
If I remember correctly, the properties the API returns are comment_score and post_score.
Lemmy does have karma, it is stored in the DB, and the API returns it. It just isn’t displayed on the UI.
It only handles HTML currently, but I like your idea, thank you! I’ll look into implementing PDF reading as well. One problem with scientific articles, however, is that they are often quite long and don’t fit into the model’s context. I would need to do recursive summarization, which would use many more tokens and could become pretty expensive. (Of course, the same problem occurs if a web page is too long; I currently just truncate it, which is a rather barbaric solution.)
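For the curious, a minimal sketch of what the recursive (map-reduce) approach might look like. The `summarize` function is a stand-in for the real LLM call (here it just truncates, so the chunking logic can be demonstrated end to end), and all names and sizes are hypothetical:

```python
def summarize(text: str, limit: int = 200) -> str:
    """Placeholder for an LLM summarization call."""
    return text[:limit]

def chunk(text: str, size: int) -> list[str]:
    """Split text into fixed-size character chunks (a real bot would count tokens)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def recursive_summarize(text: str, context_size: int = 1000) -> str:
    """Summarize each chunk, then summarize the joined summaries,
    repeating until the text fits into the model's context."""
    while len(text) > context_size:
        summaries = [summarize(part) for part in chunk(text, context_size)]
        text = "\n".join(summaries)
    return summarize(text)
```

Each pass shrinks the text, so long articles converge to a single summary, but every level of recursion costs another round of API calls, which is where the extra token cost comes from.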
We should do an AmA with her!
Lemmy actually has a really good API. Moderation tools are pretty simple though.
Here people actually react to what I post and write. And they react to the best possible interpretation of what I wrote, not the worst. And even if we disagree, we can still have a nice conversation.
Does anyone have a good theory about why the threadiverse is so much friendlier? Is it only because it’s smaller? Is it because of the kind of people a new platform like this attracts? Because there is no karma? Maybe something else?
Did I miss something? Or is this still about Beehaw?
Made the switch 4 years ago. No regrets.
Hungarian here. It is safe to drink without boiling. People only boil water for baby formula to be extra safe.
You can also use it as a PWA; it will work just like a normal app.
First, thank you for the detailed response.
Second, I think you finally convinced me to delete my FB. I will link to this comment wherever possible to show people what a terrible company Meta is.
After all, they said we need quality content to attract new users
They got gregnant
Yeah, the situation seems pretty clear
I’m the author of that bot. It will have an opt-out option, I implemented it as soon as someone suggested it:
https://programming.dev/comment/305938
Don’t spread sensationalist lies.
Oh wow, I’ve just realized it was OP I talked to in the comments. I immediately replied to their suggestion. What a clown 🤡
Can you tell us more about what they are like?
I finally got into the beta, and all I can say is: wow, I’m in love with this app 🤩
Keep up the good work!
LLMs can do a surprisingly good job even if the text extracted from the PDF isn’t in the right reading order.
Another thing I’ve noticed is that figures are usually explained thoroughly in the text, so the model doesn’t need to see them to generate a good summary. Human communication is very redundant, and we don’t realize it.
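Even so, a light cleanup pass on the extracted text can save tokens before it goes to the model. A hypothetical sketch (the patterns and function name are my own, not from any particular library): PDF extractors often emit hard line breaks and words hyphenated across lines, which simple regexes can repair while keeping paragraph breaks intact.

```python
import re

def clean_extracted_text(raw: str) -> str:
    # Join words hyphenated across a line break: "summar-\nization" -> "summarization"
    text = re.sub(r"(\w)-\n(\w)", r"\1\2", raw)
    # Turn single newlines into spaces, but keep blank lines (paragraph breaks)
    text = re.sub(r"(?<!\n)\n(?!\n)", " ", text)
    # Collapse runs of spaces and tabs
    text = re.sub(r"[ \t]+", " ", text)
    return text.strip()
```

This doesn’t fix reading order, of course; it only removes layout noise that would otherwise be spent as tokens.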