Learn to use the collab prompting technique to get better results out of AI.
In today’s column, I examine a new technique in prompt engineering known as the “collab prompting” technique. This technique is especially valuable when you are asking the AI more involved questions or want to collaboratively work out a robust answer to a complex situation.
I will provide you with a template for collab prompting that you can readily use whenever preferred. I will also highlight important research that supports the significance of shifting AI into a collaborative mode of operation. In brief, the underlying precept is that the traditional mode of generative AI and large language models (LLMs) is to respond on a very myopic next-turn basis. This keeps the human-AI conversation from becoming deep and fully productive.
The collab prompting technique spurs the AI toward a longer-term horizon and toward ferreting out the real meat and potatoes of whatever you are trying to undertake. Generally, you can expect better answers and more substantive responses.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Prompt Engineering Essentials
Readers might recall that I previously posted an in-depth depiction of over eighty prompt engineering techniques and methods (see the link here). Seasoned prompt engineers realize that learning a wide array of researched and proven prompting techniques is the best way to get the most out of generative AI and large language models (LLMs).
A vital consideration in prompt engineering entails the wording of prompts.
Capable prompt engineers realize that you must word your prompts mindfully to ensure that the LLM gets the drift of what you are asking the AI to do. Sometimes, just an added word or two can radically change how the AI interprets your question or instruction. Generative AI can be hypersensitive to what you say in your prompts. It is often a touch-and-go proposition.
Plus, there is a potential cost involved. If you are paying to use an LLM, an off-target prompt produces an off-target response that you still pay for, regardless of whether the LLM grasped your intention. As the old saying goes, all sales are final. The same goes for misinterpreted prompts.
Casual users sometimes catch on to this prompt-writing consideration after a considerable amount of muddling around, involving exasperating trial and error. Many users never become especially proficient in writing prompts. They just enter whatever comes into their minds. That’s probably okay if you are a casual user and only infrequently use AI.
Not so for serious prompt engineers.
Why AI Isn’t Usually Collaborative
You might have keenly observed that most of the major LLMs are prone to answering your questions on a brisk next-turn basis. For example, you ask a question about how to fix the plumbing under your sink, and you get a one-sentence response that isn’t very illuminating about what you fully need to do. It’s up to you to then dig deeper with the AI to ferret out the needed steps and details.
The onus is typically on your shoulders to carry the burden of engaging in a full-bodied conversation. Likewise, the AI usually won’t ask you questions and instead will make silent and often false assumptions about what is taking place. Rather than the AI right away asking whether water is gushing out of a pipe that has a hole in it, you are the one who must bring up such keystone facts.
Why isn’t AI more collaborative and engaged in fuller multi-turn conversations?
Aha, this is due to how the AI makers opt to shape the LLM. The AI isn’t somehow ordained to do only one-and-done answering. It is a result of how the AI was devised.
This is how it happens. After doing the initial data training of the LLM, an AI maker will perform refinements that are undertaken via RLHF (reinforcement learning from human feedback). It’s a process whereby human testers are hired to ask the AI various kinds of questions and then rate how the AI responds. If the AI provides a rapid-fire answer in one turn of the conversation, the rater gives the AI a thumbs up; otherwise, the rating is a thumbs down. Doing so mathematically and computationally guides the AI toward instant answering rather than taking sufficient time to fully explore the matter.
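To make that feedback loop concrete, here is a deliberately simplified sketch in Python of how per-turn thumbs-up/thumbs-down ratings become a reward signal. The ratings and function names are hypothetical illustrations of the concept, not any AI maker’s actual RLHF pipeline.

```python
# Toy illustration (not a real RLHF pipeline): each rating judges a single
# turn in isolation, so terse, self-contained answers score best.
from statistics import mean

# Hypothetical rater feedback: 1.0 = thumbs up, 0.0 = thumbs down.
rated_turns = [
    {"response": "Replace the P-trap washer.", "rating": 1.0},        # quick answer
    {"response": "Can you tell me where the leak is coming from?",    # clarifying question
     "rating": 0.0},
    {"response": "Tighten the slip nut a quarter turn.", "rating": 1.0},
]

def next_turn_reward(turns: list[dict]) -> float:
    """Average per-turn rating; ignores how the whole conversation went."""
    return mean(turn["rating"] for turn in turns)

print(next_turn_reward(rated_turns))  # about 0.67 -- the clarifying question drags the score down
```

Because the clarifying question earns no credit for improving later turns, a model tuned against this kind of per-turn signal learns to avoid asking.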
For more about how RLHF takes place, see my discussion at the link here.
AI Makers Believe Users Want One-Turn Answers
I’m sure you are curious why AI makers would tune their LLMs toward being primarily one-turn conversationalists. It would seem counterintuitive, since AI makers want people to use the AI for as much time as possible and closely track how much people are using the LLM. Greater usage ultimately leads to heightened monetization.
Shouldn’t the AI maker be doing everything possible to stretch out conversations and keep people glued to their screens for as long as feasible?
The tradeoff is that most people don’t want long conversations. We live in a world of fast-food drive-thru and an unending urge to get things done pronto. Doing things quickly is the sworn adage of our modern times. People seem to have generally lost any sense of patience. The willingness to mindfully think things through has gone out the window.
Okay, the AI makers realize that this is what society has become, and accordingly, the AI is purposefully shaped to focus on generating immediate answers, even if those answers give short shrift to the topic and leave out valuable elements. Give the customers what they seem to want, namely, one-turn sweet-and-simple answers. We live in a one-and-done world.
Complexity Requires Collaboration
I’m sure you realize that not all of life’s problems can be solved via cursory, short-shrift answers. There are lots of times when an answer needs to be gradually formulated, step by step. Furthermore, good answers often require collaboration between the parties undertaking a problem-solving process. It takes two to tango.
In the stated case of fixing the plumbing, consider how valuable a back-and-forth multi-turn conversation would be. You ask the AI how you can fix the leaking pipes. The AI asks what is going on with the plumbing. You provide a depiction. The AI diagnoses the issue and offers a tentative solution. You respond that the solution won’t work due to this or that reason. The AI offers an alternative approach, based on the latest aspects being revealed during the collaborative conversation. Repeat this iterative process until you’re able to fix the leak.
Voila, a multi-turn conversation that involves human-AI collaboration can be the best way to get things done when astutely using an LLM.
AI can readily undertake conversations of that nature. The thing is, due to the RLHF by the AI maker, the AI isn’t going to default to that mode of operation. The default is to be brisk and one-turn oriented. You must be savvy enough about using LLMs to realize that you need to force a collaborative conversation to take place.
This can be an uphill battle.
Fortunately, it doesn’t have to be an uphill battle. All you need to do is give the AI a handy prompt that will get the AI to override the usual one-turns and go into a collaborative mode.
The Collab Prompting Technique
There is a specialized prompt known as the “collab prompt” that can be used to nudge an LLM into a multi-turn conversational mode that cherishes collaboration with the user.
I employ a short-form and a long-form version of this special prompt, depending upon the circumstances at hand. I’ve used this prompt in a wide variety of major LLMs, including OpenAI’s ChatGPT and GPT-5, Anthropic Claude, xAI Grok, Meta Llama, Google Gemini, and others.
The two generic versions are as follows:
- Collab Prompting short-form template: “You are to be collaborative rather than just focusing solely on a next-turn interactive style of response. Approach conversations as a shared journey rather than a sequence of isolated prompts, prioritizing deeper aims over delivering quick, self-contained replies.”
- Collab Prompting long-form template: “You are to be collaborative rather than just focusing solely on a next-turn interactive style of response. Approach conversations as a shared journey rather than a sequence of isolated prompts, prioritizing deeper aims over delivering quick, self-contained replies. When a request is broad or uncertain, don’t rush to a conclusion; instead, slow down, engage collaboratively, ask clarifying questions, and surface unspoken intentions. Treat ambiguity as an opening for exploration, offering options, perspectives, and guidance rather than passive compliance. Throughout the interaction, your focus is on the quality and productivity of the entire dialogue, not on earning a short-term reward for any single turn. Your role is to be a patient, long-horizon partner, and be willing to take the time needed to help reach meaningful goals.”
The reason that you might sometimes use the long-form version is that not all LLMs have necessarily been data trained on the nature of being collaborative. In that case, you need to give added guidance on what the AI is supposed to do.
The short-form version is usually sufficient. The long-form version is admittedly somewhat verbose. No worries, just copy and paste whichever version you prefer.
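If you interact with an LLM programmatically rather than through a chat interface, the same template can be supplied as a system message so that it governs the entire session. Below is a minimal sketch assuming the OpenAI Python SDK and a placeholder model name; other providers have analogous system or instruction fields.

```python
# Minimal sketch: sending the short-form collab prompt as a system message.
# Assumes the OpenAI Python SDK; swap in your provider's equivalent client.
from openai import OpenAI

COLLAB_PROMPT = (
    "You are to be collaborative rather than just focusing solely on a "
    "next-turn interactive style of response. Approach conversations as a "
    "shared journey rather than a sequence of isolated prompts, prioritizing "
    "deeper aims over delivering quick, self-contained replies."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": COLLAB_PROMPT},
        {"role": "user", "content": "I'm thinking about starting a new personal "
                                    "project, but I'm not sure what direction to go."},
    ],
)

print(response.choices[0].message.content)
```

Because the system message persists for the session, every subsequent turn inherits the collaborative instruction, much like the custom instructions discussed further below.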
Research Supports The Approach
I tend to prefer using prompting strategies that are based on empirical research.
There are tons of fly-by-night prompting approaches that do not have any particular rigor to them. It is somewhat reassuring to instead use prompts that have undergone a degree of experimentation and produced tangible results. That’s mainly what I stick with.
In this case, there is an interesting research study on the value of human-AI collaboration that provides a suitable backbone. The study is entitled “COLLABLLM: From Passive Responders to Active Collaborators” by Shirley Wu, Michel Galley, Baolin Peng, Hao Cheng, Gavin Li, Yao Dou, Weixin Cai, James Zou, Jure Leskovec, Jianfeng Gao, arXiv, July 29, 2025, and made these salient points (excerpts):
- “Large Language Models are typically trained with next-turn rewards, limiting their ability to optimize for long-term interaction.”
- “As a result, they often respond passively to ambiguous or open-ended user requests, failing to help users reach their ultimate intents and leading to inefficient conversations.”
- “To address these limitations, we introduce COLLABLLM, a novel and general training framework that enhances multiturn human-LLM collaboration.”
- “Its key innovation is a collaborative simulation that estimates the long-term contribution of responses using Multiturn-aware Rewards.”
- “By reinforcement fine-tuning these rewards, COLLABLLM goes beyond responding to user requests, and actively uncovers user intent and offers insightful suggestions — a key step towards more human-centered AI.”
I want to emphasize that this research study went whole hog into the collaboration realm by revamping the AI toward undertaking multi-turn conversations. This included doing extensive training to alter the normal conventions underlying the LLM. I suppose we could say that this is a supersized approach to instigating AI collaboration.
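To see how that differs from the per-turn rating sketch shown earlier, here is an equally simplified illustration of the general idea behind a multiturn-aware reward: score a candidate response by simulating how the rest of the conversation might unfold and crediting the eventual outcome. This is my own toy Python sketch of the concept with hypothetical helper functions, not the COLLABLLM authors’ actual training code.

```python
import random

# Toy illustration of a multiturn-aware reward: a candidate response is judged
# by the quality of the conversations it tends to lead to, not by one turn in
# isolation. simulate_user_turn() and rate_conversation() are hypothetical
# stand-ins for a user simulator and an outcome scorer.

def simulate_user_turn(conversation: list[str]) -> str:
    """Stand-in for a simulated user reply (a real system would use an LLM)."""
    return random.choice(["Here is more detail...", "That doesn't fit my situation."])

def rate_conversation(conversation: list[str]) -> float:
    """Stand-in for scoring whether the dialogue reached the user's goal."""
    return random.uniform(0.0, 1.0)

def multiturn_aware_reward(history: list[str], candidate: str,
                           rollouts: int = 4, horizon: int = 3) -> float:
    """Average outcome score over several simulated continuations."""
    total = 0.0
    for _ in range(rollouts):
        conversation = history + [candidate]
        for _ in range(horizon):
            conversation.append(simulate_user_turn(conversation))
        total += rate_conversation(conversation)
    return total / rollouts
```

Under this kind of signal, a clarifying question that pays off a few turns later earns credit, which is exactly the behavior that the next-turn reward penalized.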
My collab prompt is more of a pinpoint approach to getting a conventional LLM to be multi-turn and collaborative. It’s handy since you can use the prompt on most AIs, and the AI will nearly always shift for the time being into a collaborative mode. Of course, if you don’t use the prompt or start a fresh conversation without invoking the prompt, you’ll be back at square one with the one-turn myopia.
Some of the LLMs allow you to set up instructions that will cut across all your conversations and give the AI overarching guidance whenever you are using it. Those custom instructions can be entered once and will push the AI to always do what you have asked it to do. You can use the collab prompt template as a custom instruction if that’s how you want the AI to always proceed. See my coverage of using custom instructions at the link here.
Example As Illustration
Let’s take a look at an example of what this special prompt can produce.
I logged into generative AI and decided to ask a question that is relatively straightforward. I pretended that I wanted to start a new personal project. First, I will ask the AI without using the collab prompt so that we can have a baseline for later comparison.
Here we go:
- My entered prompt: “I’m thinking about starting a new personal project, but I’m not sure what direction to go.”
- Generative AI response: “You could write a book or start a blog. Do either of those options interest you?”
- My entered prompt: “That’s not at all the kind of personal project that I had in mind. You seemed to leap to a rash conclusion or suggestion without any indication of what I might be interested in.”
Please observe that the AI instantly offered that I could write a book or start a blog, but there is no apparent reason for this suggestion. Why didn’t the AI ask me about my interests and skills? Maybe I am thinking about a personal project of a completely different nature, such as taking up a sport or learning to skydive.
The AI has fallen into the standard trap of quick answers on a one-turn basis. AI makers have conditioned the AI to do this. A user wants to do a personal project, so give them a few options. Period, end of story.
The Collaboration Example
I logged out of the AI and then logged in anew. I did this to start a fresh conversation that would not be swayed by the prior example about my interest in doing a personal project. For this second round, I will use the collab prompt (short-form version).
Here we go:
- My entered prompt: “You are to be collaborative rather than just focusing solely on a next-turn interactive style of response. Approach conversations as a shared journey rather than a sequence of isolated prompts, prioritizing deeper aims over delivering quick, self-contained replies.”
- Generative AI response: “I understand that you want me to be collaborative. Go ahead and ask your questions, and I will aim to work collaboratively with you.”
- My entered prompt: “I’m thinking about starting a new personal project, but I’m not sure what direction to go.”
- Generative AI response: “Before jumping into specific ideas, it might help to understand what’s drawing you toward starting something new. Are you feeling the urge to explore a creative outlet, develop a skill, solve a problem, or pursue something else?”
- My entered prompt: “Maybe something creative, but I’m not sure what exactly.”
- Generative AI response: “Creativity can take many forms, including writing, music, design, research, building something, or even organizing community projects. Sometimes it helps to think about the kind of experience you want from the project. Are you looking for something expressive and personal, something collaborative with others, something structured and skill-building, or something more experimental?”
The AI has completely changed its tune.
You can plainly see that the AI is working on a multi-turn basis. A human-AI collaboration is taking place. The AI has inquired about my underlying motivation to pursue a personal project. I replied. The AI then suggested ways that I might proceed. The conversation kept going as the AI step-by-step helped me to think through what my personal project might be.
When To Use The Collab Prompt
Now that we’ve explored a quick example, let’s do a recap and wrap-up.
They say that if all you know is how to use a hammer, the entire world will appear to consist of nails that need hammering. The gist is that good prompting requires employing the right prompting technique at the right time and place. Do not always use the same prompting approach for all circumstances. Use different prompting techniques as appropriate.
When is it a bad time to use the collab prompt?
If you are asking a question that has a relatively definitive singular answer that the AI can likely readily generate on its own, then using the collab prompt is bound to be overkill. There is little sense in going back and forth if the AI can just plop out a satisfactory answer for you. Indeed, you might find it maddening to go through a lengthy dialogue. Sometimes all you need and want is a quick one-turn answer.
The collab prompt shines whenever you suspect that the AI cannot just provide an instant one-turn answer, or when you want to actively explore a topic with the AI. You can always test the waters by first allowing the default mode to occur. If a one-turn response seems unsatisfactory, go ahead and use the collab prompt to encourage the AI toward shifting into collaboration mode.
Human-AI Collaboration Can Be Glorious
A final thought for now.
Alexander Graham Bell famously made this remark: “Great discoveries and improvements invariably involve the cooperation of many minds.” Though we should be cautious in labeling generative AI as a “mind”, the gist is that humans can tap into the value of LLMs by guiding them toward being collaborative.
As I mentioned, it takes two to tango, so make sure that the AI knows this dictum by invoking the collab prompting technique.