
Alternatively, give the same prompt to another model and get a completely different answer — sometimes the opposite one. Or give the same prompt to the same model after its latest fine-tuning and get a completely different answer. Or warm the model up with leading prompts and get a different answer.

These things are just addictive toys, nothing more.



You can ask the exact same question of the same LLM, and the "artificial entropy" injected into the inference process is enough to produce a completely different response.
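That "artificial entropy" is usually temperature sampling: instead of always picking the highest-scoring next token, the model draws from a probability distribution over candidates. A minimal sketch with made-up logits (the numbers and function names here are illustrative, not from any real model):

```python
import math
import random

def sample_token(logits, temperature=0.8, rng=random):
    # Scale logits by 1/temperature, then softmax into probabilities.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token index from the resulting distribution.
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical scores for three candidate next tokens.
logits = [2.0, 1.8, 0.5]
counts = [0, 0, 0]
for _ in range(1000):
    counts[sample_token(logits)] += 1
# At temperature > 0 the runner-up token wins a large fraction of draws,
# so repeated runs of the same prompt diverge. As temperature approaches 0,
# sampling collapses to greedy argmax and the output becomes repeatable.
```

This is why "same prompt, different answer" isn't surprising on its own: determinism is a sampling setting, not a property of the model.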



