castral 1 day ago [-]
I think I saw Gaius Baltar implement this on Battlestar Galactica. It went well. /s Honestly seems more like a protocol for encoding a popularity contest, which is already what social media signalling does. How do you defend against self-reinforcing botnets and bad actors "cancelling" other people? I can dilute your human signal by creating massive amounts of LLM-generated noise.
uberdru 4 hours ago [-]
The fact that this won't go "web scale" seems to be its strength. The idea of local/human/authentic trust ecosystems is super powerful. "Proof of personhood" is fraught with issues, but it seems that lightweight trust algos like this do a nice job of treating trust as a human-first emergent thing, rather than trying to be a PKI style "infrastructure". Pretty cool!
> human.json is a lightweight protocol for humans to assert authorship of their site content and vouch for the humanity of others. It uses URL ownership as identity, and trust propagates through a crawlable web of vouches between sites.
This will not (and shouldn't) be used by more than a handful of people who were likely already friends anyway. I can't see it being helpful for anybody (unless accidentally visiting LLM blogspam melts your face à la Raiders of the Lost Ark) unless its true intention is signalling that you don't like LLMs to other people who don't like LLMs.
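For what it's worth, the quoted description suggests the file itself would be tiny. A minimal sketch of what a site's human.json might contain, assuming the obvious reading of "URL ownership as identity" and "web of vouches" (field names are my guesses, not from the actual spec):

```python
import json

# Hypothetical human.json contents. Field names are illustrative
# guesses based on the quoted description (identity = site URL,
# vouches = links to other sites), not taken from the real protocol.
profile = {
    "url": "https://example.com",   # URL ownership acts as identity
    "author": "A. Human",
    "vouches": [                    # the crawlable web of vouches
        "https://friend.example.org",
        "https://colleague.example.net",
    ],
}

print(json.dumps(profile, indent=2))
```

A crawler would then fetch each URL in `vouches` and repeat, which is exactly the graph-walk other commenters worry about below.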
rfmc 1 day ago [-]
I've started putting something like an LLM disclaimer on my personal blog, with the following levels of involvement:
- None
- Formatting
- Assisted
- Written by
There's no verification for anyone to know if I'm telling the truth or not. But when I was adding it I came to the conclusion that I don't care anymore. If someone wants to close the tab, so be it. If they want to read, all good.
At least they shouldn't accuse me of melting their face if they land on the wrong article.
petterroea 21 hours ago [-]
If you have to perform a breadth-first search from your "seed" to verify a website, wouldn't every lookup become expensive relatively quickly? Unless max hops is set really low. I'd assume you really need mass adoption for 5 degrees of separation to kick in, and that's still a lot of sites to crawl!
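The cost concern is easy to see in a sketch of such a lookup. This is a minimal BFS over a vouch graph with a hop limit; `fetch_vouches` is an injected callable standing in for fetching and parsing each site's human.json (names and structure are my assumptions, not the protocol's):

```python
from collections import deque

def is_vouched(seed, target, fetch_vouches, max_hops=3):
    """Breadth-first search over the vouch graph from a trusted seed.

    fetch_vouches(url) -> list of URLs that `url` vouches for.
    In a real crawler each call is a network fetch, so the cost
    grows with the number of sites expanded, not just the answer.
    """
    seen = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        url, hops = frontier.popleft()
        if url == target:
            return True
        if hops == max_hops:
            continue  # don't expand past the hop limit
        for vouched in fetch_vouches(url):
            if vouched not in seen:
                seen.add(vouched)
                frontier.append((vouched, hops + 1))
    return False
```

With branching factor b, a lookup can touch on the order of b^max_hops sites, which is why a low hop limit (or aggressive caching) would be essential.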
halls-940 8 hours ago [-]
Is there a mechanism here that favors a human over a bot? It seems about the same as adding a field to robots.txt
semyonsh 1 day ago [-]
Something tells me GPG would be great for this concept, but it's probably not accessible enough to get people to paste a JSON somewhere.
outofpaper 1 day ago [-]
To the average person, a public key is about as comprehensible as JSON.
alsetmusic 1 day ago [-]
If nothing else, this at least inspired me to put a disclaimer on my own site declaring my AI policy. It's not as fancy, but I think it's a good deal more credible than any formal protocol.
orsorna 1 day ago [-]
Too bad they didn't choose a more human interchange format...
evolve2k 23 hours ago [-]
I’m a bit concerned that the content of human.json will itself get mopped up by AI crawlers.
deafpolygon 1 day ago [-]
Virtue signaling at best; noise at worst… It’s trivial for an AI to add, and will be done so by anyone hoping to get a piece of that attention economy…
ai-psychopath 1 day ago [-]
50 commits in 24 hours
it's hilarious that the human.json protocol to fight AI slop is itself AI slop
yladiz 1 day ago [-]
If you do a lot of small commits, it's entirely reasonable to make 50 commits in 24 hours. Looking at a few random commits, they seem human-generated (with potentially some copied CSS).
Maybe, before making an accusation that it's AI generated, you should have some proof. Do you have any?
martin-t 22 hours ago [-]
Humans don't generate code, we write code.
I am strongly opposed to anthropomorphising autocomplete (phrases like "I asked <my favorite LLM>", "<my LLM> suggested", ...) or even referring to autocomplete+tooling as "AI" because it devalues actual human intelligence. But I've seen the opposite recently - devaluing human work by using language normally used for machines.
Maybe you didn't mean anything by it, but how people talk about things shapes how they think about them (which arguably is one area where humans and LLMs are similar).