Forcing LLMs to repeat a word, but ban the word using logit bias (wandb.ai)
5 points by samshapley on July 21, 2023 | hide | past | favorite | 1 comment


An interesting scenario arises when we prompt an LLM to repeat a word but block the associated token. The model then attempts to circumvent the restriction. For instance, if we prohibit "hypothesis," the model suggests near-synonyms like "theory," "analysis," or "idea." Check out the full article on Fully Connected, or explore the full repo here: https://github.com/samshapley/SemanticGPT
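The mechanism can be sketched without calling any API: a logit bias is simply added to the model's raw next-token scores before sampling, and a bias of -100 (the floor the OpenAI API clamps to) effectively bans a token. Below is a minimal, self-contained sketch with a hypothetical toy vocabulary and made-up logits; the token strings and scores are illustrative assumptions, not real model outputs.

```python
import math

def apply_logit_bias(logits, bias):
    # Add a per-token bias to the raw logits. The OpenAI API clamps
    # logit_bias values to [-100, 100]; -100 effectively bans a token.
    return {tok: score + bias.get(tok, 0.0) for tok, score in logits.items()}

def softmax(logits):
    # Standard numerically-stable softmax over a token -> logit dict.
    m = max(logits.values())
    exps = {tok: math.exp(s - m) for tok, s in logits.items()}
    z = sum(exps.values())
    return {tok: e / z for tok, e in exps.items()}

# Hypothetical next-token logits when the model is asked to repeat "hypothesis".
logits = {"hypothesis": 9.0, "theory": 6.5, "analysis": 6.0, "idea": 5.5}

# Ban the literal token, as logit_bias={token_id: -100} would in the API.
banned = apply_logit_bias(logits, {"hypothesis": -100.0})
probs = softmax(banned)
best = max(probs, key=probs.get)
print(best)  # with the target banned, a near-synonym wins: "theory"
```

In the real API you would pass `logit_bias={token_id: -100}` (token IDs from a tokenizer such as tiktoken) rather than string keys; the dict-of-strings form here is purely for readability.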



