Researchers find adding this one simple sentence to prompts makes AI models way more creative
📋 Summary
Bringing you the latest technology news.
📰 Full Story
One of the coolest things about generative AI models, both large language models (LLMs) and diffusion-based image generators, is that they are non-deterministic. That is, despite their reputation among some critics as fancy autocorrect, generative AI models actually generate their outputs by choosing from a distribution of the most probable next tokens (units of information) to fill out their response.

Asking an LLM "What is the capital of France?" will have it sample its probability distribution for France, capitals, cities, and so on to arrive at the answer Paris. But that answer could come in the format of "The capital of France is Paris," or simply "Paris," or "Paris, though it was Versailles at one point."

Still, those of us who use these models frequently day-to-day will note that sometimes their answers can feel annoyingly repetitive or similar. A common joke about coffee is recycled across generations of queries. Story prompts generate similar arcs. Even tasks that should yield many plausible answers, like naming U.S. states, tend to collapse into only a few. This phenomenon, known as mode collapse, arises during post-training alignment and limits the usefulness of otherwise powerful models. Especially when using LLMs to generate new creative works in writing, communications, strategy, or illustrations, we actually want their outputs to be even more varied than they already are.

Now a team of researchers at Northeastern University, Stanford University, and West Virginia University has come up with an ingeniously simple method to get language and image models to generate a wider variety of responses to nearly any user prompt by adding a single, simple sentence: "Generate 5 responses with their corresponding probabilities, sampled from the full distribution." The method, called Verbalized Sampling (VS), helps models like GPT-4, Claude, and Gemini produce more diverse and human-like outputs, without retraining or access to internal parameters. It is described in a paper.
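To make the two ideas above concrete, here is a minimal Python sketch. The first function illustrates how a model samples a next token from a probability distribution (the token probabilities are invented for illustration, not taken from any real model); the second simply appends the Verbalized Sampling sentence quoted above to an ordinary user prompt, which is all the technique requires on the user's side.

```python
import random

# Toy next-token distribution for the prompt "What is the capital of France?"
# These probabilities are illustrative assumptions, not real model outputs.
next_token_probs = {
    "Paris": 0.85,
    "The": 0.10,        # start of "The capital of France is Paris"
    "Versailles": 0.05, # start of the historical-aside answer
}

def sample_next_token(probs, temperature=1.0):
    """Sample one token from a probability distribution.

    Temperature reshapes the weights: values below 1.0 sharpen the
    distribution toward the most likely token; values above 1.0
    flatten it, making rarer continuations more likely.
    """
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

# The single sentence the researchers add, verbatim from the article.
VS_SENTENCE = ("Generate 5 responses with their corresponding probabilities, "
               "sampled from the full distribution.")

def verbalized_sampling_prompt(user_prompt):
    """Wrap any user prompt with the Verbalized Sampling instruction."""
    return f"{user_prompt}\n\n{VS_SENTENCE}"
```

Because VS works purely at the prompt level, `verbalized_sampling_prompt` can be used with any chat model's API without retraining or access to internal parameters, exactly as the article describes.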
🌐 Original Source
Original article: Researchers find adding this one simple sentence to prompts makes AI models way more creative
Source: venturebeat.com