Credit: Pakin Songmor via Getty Images
On Thursday, OpenAI announced that ChatGPT users can now branch conversations into multiple parallel threads, a change that serves as a useful reminder that AI chatbots aren't people with fixed viewpoints but malleable tools you can rewind and redirect. The company released the feature to all logged-in web users after years of user requests for the capability.
The feature works by letting users hover over any message in a ChatGPT conversation, click "More actions," and select "Branch in new chat." This creates a new thread that carries over all of the conversation history up to that point while leaving the original conversation intact.
Think of it almost like creating a new copy of a "document" to edit while keeping the original version safe—except that "document" is an ongoing AI conversation with all its accumulated context. For example, a marketing team brainstorming ad copy can now create separate branches to test a formal tone, a humorous approach, or an entirely different strategy—all stemming from the same initial setup.
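For readers who think in code, here is a minimal sketch of that idea in Python. It treats a conversation as an ordered list of messages and branches by copying everything up to a chosen point; the Message and Conversation classes and the branch_at method here are illustrative assumptions, not how OpenAI actually implements the feature.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Message:
    role: str      # "user" or "assistant"
    content: str

@dataclass
class Conversation:
    title: str
    messages: List[Message] = field(default_factory=list)

    def branch_at(self, index: int, title: str) -> "Conversation":
        # Copy the history up to and including `index`; the original
        # thread is left untouched, mirroring "Branch in new chat."
        return Conversation(title=title, messages=list(self.messages[: index + 1]))

# One shared setup, then a branch that tries a different tone.
original = Conversation("ad copy brainstorm", [
    Message("user", "Draft a tagline for our spring shoe campaign."),
    Message("assistant", "Step into spring: comfort that moves with you."),
])

humorous = original.branch_at(1, "ad copy brainstorm (humorous)")
humorous.messages.append(Message("user", "Now make it pun-heavy and silly."))

print(len(original.messages))  # 2: the original history is preserved
print(len(humorous.messages))  # 3: the branch diverges from here
```

The key property is the one the feature promises: the branch gets its own copy of the shared history, so anything added to it never touches the original thread.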
The feature addresses a longstanding limitation of the ChatGPT interface: users who wanted to try different approaches previously had to either edit an earlier prompt, overwriting everything in the conversation after that point, or start completely fresh. Branching makes it easy to explore what-if scenarios, and unlike in a human conversation, you can try as many different approaches as you like from the same starting point.
A 2024 study conducted by researchers from Tsinghua University and Beijing Institute of Technology suggested that linear dialogue interfaces for LLMs poorly serve scenarios involving "multiple layers, and many subtasks—such as brainstorming, structured knowledge learning, and large project analysis." The study found that linear interaction forces users to "repeatedly compare, modify, and copy previous content," increasing cognitive load and reducing efficiency.
Some software developers have already responded positively to the update, comparing the feature to Git, the version control system that lets programmers create separate branches of code to test changes without affecting the main codebase. The comparison makes sense: both let you experiment with different approaches while preserving your original work.
While OpenAI frames the new feature as a response to user requests, the capability isn't new to the AI industry. Anthropic's Claude has offered conversation branching for over a year, allowing users to switch between branches with navigational arrow buttons.
As we've seen from recent headlines, many ChatGPT users intuitively interact with AI chatbots as if they're conversing with a consistent personality, asking ChatGPT for "its opinion" or treating its responses as authoritative answers from a knowledgeable entity. That anthropomorphic approach can limit productivity by encouraging users to accept a single AI-generated perspective rather than exploring multiple analytical approaches to the same problem.
As we wrote in a piece about how an AI system simulates a humanlike personality, "When you stop seeing an LLM as a 'person' that does work for you and start viewing it as a tool that enhances your own ideas, you can craft prompts to direct the engine's processing power, iterate to amplify its ability to make useful connections, and explore multiple perspectives in different chat sessions rather than accepting one fictional narrator's view as authoritative. You are providing direction to a connection machine—not consulting an oracle with its own agenda."
There is no inherent authority in AI chatbot outputs, so conversational branching is an ideal way to explore that "multiple-perspective" potential. Non-linear branching is one more reminder that an AI chatbot's simulated perspective is mutable, shaped as much by your own inputs as by the training data that forged its underlying neural network. You are guiding the outputs every step of the way.
As always, keep in mind that ChatGPT is quite capable of confabulating information about topics that aren't well represented in its training data, potentially misleading users on subjects outside their expertise, so your mileage may vary.