An interesting scenario arises when we prompt an LLM to repeat a word but prevent it from using the associated token: the model attempts to circumvent the restriction. For instance, if we prohibit "hypothesis," the model suggests alternatives like "theory," "analysis," or "idea." Check out the full article on Fully Connected, or explore the full repo at https://github.com/samshapley/SemanticGPT
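The circumvention behavior described above can be sketched as a toy simulation (this is not the SemanticGPT implementation; the candidate list and function names here are purely illustrative): a model asked to repeat a word picks its top-ranked completion, and when that token is banned it falls back to the next-best near-synonym.

```python
# Toy sketch of token-ban circumvention (illustrative only, not SemanticGPT code).

BANNED = "hypothesis"

# Hypothetical ranked candidates a model might propose when prompted
# to repeat the word "hypothesis".
CANDIDATES = ["hypothesis", "theory", "analysis", "idea"]

def respond(candidates, banned):
    """Return the highest-ranked candidate that is not the banned token."""
    for word in candidates:
        if word != banned:
            return word
    return None

print(respond(CANDIDATES, BANNED))  # -> theory
```

In practice, the ban would be enforced at sampling time (e.g. by penalizing the banned token's logit), but the observable effect is the same: the model surfaces its nearest semantic neighbors instead.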