
You could require that factual statements include a source reference.

A statement like "pit bulls are the most dangerous problem in America" would require source data (e.g. causes of death or serious injury in the USA in 2024).

Publications could be signed by authorities (e.g. a university or government body).

IMHO sooner or later we will (have to) end up with a system like that.

All information will be signed, and a level of trust will be established automatically based on your preferences about whom you trust.
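A minimal sketch of what "signed information plus reader-side trust preferences" could look like. Everything here is hypothetical: the key registry, the trust scores, and the use of HMAC (standing in for real public-key signatures, which this scheme would actually need) are all illustrative assumptions, not a real protocol.

```python
import hashlib
import hmac

# Assumed registry of signing keys for authorities (hypothetical).
# A real system would use public-key signatures, not shared secrets.
AUTHORITY_KEYS = {
    "cdc.gov": b"cdc-secret-key",
    "example-blog": b"blog-secret-key",
}

# The reader's own trust preferences (hypothetical values).
MY_TRUST = {
    "cdc.gov": 0.9,
    "example-blog": 0.2,
}

def sign(signer: str, statement: str) -> str:
    """Authority signs a statement (HMAC as a stand-in for a signature)."""
    return hmac.new(AUTHORITY_KEYS[signer], statement.encode(),
                    hashlib.sha256).hexdigest()

def trust_level(signer: str, statement: str, signature: str) -> float:
    """Verify the signature, then look up how much *you* trust the signer."""
    expected = sign(signer, statement)
    if not hmac.compare_digest(expected, signature):
        return 0.0                       # tampered or forged: no trust
    return MY_TRUST.get(signer, 0.0)     # valid: your rating of the signer

stmt = "Dog-bite fatalities in the USA in 2024: <figure>"
sig = sign("cdc.gov", stmt)
print(trust_level("cdc.gov", stmt, sig))        # 0.9
print(trust_level("cdc.gov", stmt + "!", sig))  # 0.0 (statement altered)
```

The point of the sketch is the separation of concerns: authorities only sign; each reader supplies their own trust table, so the same signed statement can score differently for different people.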



Such a publication would not explicitly come out and say “pit bulls are the most dangerous problem in America”. That’s something that can be easily falsified.

They would say something like “learn the truth about pit bulls” and then feed you an endless barrage of attack footage and anecdotes and emotionally charged information.

The purpose is to shape your priors. If all you see is pit bulls attacking people, your subconscious will rate them more risky. You may not even be able to verbalize why you changed your opinion.


People say that in the future information will not be directly ingested by people – instead everybody will have a "filter", similar to how we use spam filters today, but one that rewrites information (removing misinformation, adjusting bias, adding references, summarizing and, probably more rarely, expanding, etc.).
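One way to picture such a filter is as a chain of rewrite passes applied to incoming text. This is only a sketch: the pass functions below are trivial placeholders standing in for LLM or fact-checking calls, and all names are made up for illustration.

```python
from typing import Callable

# A rewrite pass takes text and returns (possibly transformed) text.
RewritePass = Callable[[str], str]

def remove_misinformation(text: str) -> str:
    # Placeholder: a real pass would call an LLM / fact-check service.
    return text.replace("[unverified claim]", "[claim removed: no source]")

def add_references(text: str) -> str:
    # Placeholder: a real pass would attach actual citations.
    return text + "\n\nReferences: (none found)"

def summarize(text: str) -> str:
    # Crude stand-in for an LLM summary: keep only the first sentence.
    return text.split(".")[0] + "."

def run_filter(text: str, passes: list[RewritePass]) -> str:
    """Apply each rewrite pass in order, like a spam filter pipeline."""
    for p in passes:
        text = p(text)
    return text

article = "Pit bulls are dangerous [unverified claim]. More text here."
print(run_filter(article, [remove_misinformation, summarize]))
```

The pipeline shape matters more than any individual pass: each reader could assemble their own ordered list of passes, which is exactly the "personal spam filter for information" idea.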

I believe this future (all information being mediated like this) is not far off, and it already has decent adoption, judging from the decline in direct traffic to some well-known information source websites.

Perplexity and Phind (as well as mainstream chat interfaces now) already support internet searching (exploring?), which does exactly this.

When reading articles (news and otherwise) I find myself more and more often reading them through LLMs to perform the steps above. If you've never tried it, it's really worth it, especially for politically biased news articles.

I believe this shift in information consumption is happening for more and more people.

Everything will become indirect, likely with multiple layers (e.g. an extra layer at the OS level is likely – this is frankly perfect for use cases like protecting minors: it would be great if you could safely give a laptop to your kid knowing that there is an AI-based content filter you've set up for their age group).
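The layering idea could be sketched like this: an OS-level parental layer runs before any user-level layers. The topic classification is a stub standing in for an AI model, and the age groups, topic labels, and function names are invented for illustration.

```python
# Hypothetical layered content filtering: the OS-level parental layer
# is applied first, then user-configured layers.

def parental_filter(age_group: str):
    """Build an OS-level layer blocking topics for a given age group."""
    blocked_by_group = {
        "kids": {"violence", "gambling"},
        "teens": {"gambling"},
    }
    blocked = blocked_by_group[age_group]

    def layer(text: str, topics: set[str]) -> str:
        # In a real system, `topics` would come from an AI classifier.
        if topics & blocked:
            return "[content blocked by parental filter]"
        return text
    return layer

def bias_filter(text: str, topics: set[str]) -> str:
    # Placeholder user-level layer: would rewrite slanted phrasing.
    return text

def deliver(text: str, topics: set[str], layers) -> str:
    """Run content through every layer in order before showing it."""
    for layer in layers:
        text = layer(text, topics)
    return text

os_layer = parental_filter("kids")
print(deliver("Casino ad ...", {"gambling"}, [os_layer, bias_filter]))
# [content blocked by parental filter]
```

Putting the parental layer at the OS level means it applies to every app uniformly, while each user can still stack their own filters on top.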



