
Language models have unlocked possibilities for how users can interact with AI systems and how these systems can communicate with each other — through natural language.
When enterprises want to build solutions using Agentic AI capabilities, one of the first technical questions is often “what tools do I use?” For those who are eager to get started, this is the first roadblock.

In this article, we will explore two of the most popular frameworks for building Agentic AI applications — LangChain and LangGraph. By the end of this article, you should have a thorough understanding of the key building blocks, how each framework differs in handling core pieces of its functionality, and be able to form an educated point of view on which framework best fits your problem.
Since the practice of widely incorporating Generative AI into solutions is relatively new, open-source players are actively competing to develop the “best” agent framework and orchestration tools. This means that although each player brings a unique approach to the table, they are rolling out new functionality near constantly. When reading this piece keep in mind that what’s true today, might not be true tomorrow!
Note: I originally intended to draw the comparison between AutoGen, LangChain, and LangGraph. However, AutoGen has announced that it is launching AutoGen 0.4, a complete redesign of the framework from the foundation up. Look out for another article when AutoGen 0.4 launches!
By understanding the different base elements of each framework, you will have a richer understanding of the key differences in how they handle certain core functionality in the next section. The description below is not an exhaustive list of all of the components of each framework, but it serves as a strong basis for understanding the difference in their general approach.
Building Blocks
LangChain
There are two methods for working with LangChain: as a sequential chain of predefined commands or using LangChain agents. Each approach differs in the way it handles tools and orchestration. A chain follows a predefined linear workflow, while an agent acts as a coordinator that can make more dynamic (non-linear) decisions.
LangGraph
LangGraph approaches AI workflows from a different standpoint. As the name suggests, it orchestrates workflows as a graph. Because of its flexibility in handling different flows between AI agents, procedural code, and other tools, it is better suited for use cases where a linear chain, branched chain, or simple agent system falls short. LangGraph was designed to handle more complex conditional logic and feedback loops than LangChain.
LangGraph and LangChain overlap in some of their capabilities, but they approach the problem from different perspectives. LangChain focuses on either linear workflows through the use of chains or different AI agent patterns, while LangGraph focuses on creating a more flexible, granular, process-based workflow that can include AI agents, tool calls, procedural code, and more.
In general, LangChain requires less of a learning curve than LangGraph. There are more abstractions and pre-defined configurations that make LangChain easier to implement for simple use cases. LangGraph allows more custom control over the design of the workflow, which means that it is less abstracted and the developer needs to learn more to use the framework effectively.
Tool Calling
LangChain
In LangChain, there are two ways tools can be called, depending on whether you are using a chain to sequence a series of steps or are just using its agent capabilities without an explicitly defined chain. In a chain, tools are included as a pre-defined step — they aren't chosen by the agent, because it was already predetermined that they would be called. However, when an agent is not defined in a chain, it has the autonomy to decide which tool to invoke and when, based on the list of tools it has access to.
Example of Flow for a Chain:

Example of Flow for an Agent:

LangGraph
In LangGraph, tools are usually represented as a node on the graph. If the graph contains an agent, then it is the agent that determines which tool to invoke based on its reasoning abilities. Based on the agent's tool decision, the graph navigates to the “tool node” to handle the execution of the tool. Conditional logic can be included in the edge from the agent to the tool node to determine whether a tool gets executed, giving the developer another layer of control if desired. If there is no agent in the graph, then, much like in LangChain's chains, the tool can be included in the workflow based on conditional logic.
Example of Flow for a Graph with an Agent:

Example of Flow for a Graph without an Agent:

If you want to learn more about tool calling, my friend Tula Masterman has an excellent article about how tool calling works in Generative AI.
Note: Neither LangChain nor LangGraph support semantic functions out of the box like MSFT’s Semantic Kernel.
Memory
LangChain
LangChain offers built-in abstractions for handling conversation history and memory. There are options for the level of granularity (and therefore the number of tokens) you'd like to pass to the LLM, which include the full session conversation history, a summarized version, or a custom-defined memory. Developers can also create custom long-term memory systems, storing memories in external databases to be retrieved when relevant.
LangGraph
In LangGraph, the state handles memory by keeping track of defined variables at every point in time. State can include things like conversation history, steps of a plan, the output of a language model's previous response, and more. It is passed from one node to the next so that each node has access to the current state of the system. However, long-term persistent memory across sessions is not available as a direct feature of the framework. To implement it, developers could include nodes responsible for storing memories and other variables in an external database to be retrieved later.
RAG
LangChain
LangChain can handle complex retrieval and generation workflows and has a more established set of tools to help developers integrate RAG into their applications. For instance, LangChain offers document loading, text parsing, embedding creation, vector storage, and retrieval capabilities out of the box through langchain.document_loaders, langchain.embeddings, and langchain.vectorstores.
LangGraph
In LangGraph, RAG needs to be developed from scratch as part of the graph structure. For example, there could be separate nodes for document parsing, embedding, and retrieval, connected by normal or conditional edges. The shared state would be used to pass information between steps in the RAG pipeline.
Parallelism
LangChain
LangChain offers the ability to run multiple chains or agents in parallel using the RunnableParallel class. For more advanced parallel processing and asynchronous tool calling, the developer would have to implement these capabilities using Python libraries such as asyncio.
LangGraph
LangGraph supports the parallel execution of nodes as long as there aren't any dependencies between them (such as the output of one node serving as an input to the next). This means it can support multiple agents running at the same time in a graph, as long as they are not dependent nodes. Like LangChain, LangGraph can use the RunnableParallel class to run multiple graphs in parallel. LangGraph also supports parallel tool calling using Python libraries like asyncio.
Error Handling
LangChain
In LangChain, error handling is explicitly defined by the developer and can be done by introducing retry logic into the chain itself or into the agent if a tool call fails.
LangGraph
In LangGraph, you can embed error handling into your workflow by making it its own node. When certain tasks fail, you can route to another node or have the same node retry. Best of all, only the particular node that fails is retried, not the entire workflow — the graph can resume from the point of failure rather than starting from the beginning. If your use case requires many steps and tool calls, this could be important.
You can use LangChain without LangGraph, LangGraph without LangChain, or both together! It's also completely possible to explore using LangGraph's graph-based orchestration with other Agentic AI frameworks like MSFT's AutoGen by making the AutoGen agents their own nodes in the graph. Safe to say there are a lot of options — and it can feel overwhelming.
So after all this research, when should I use each? Although there are no hard and fast rules, below is my personal opinion:
Use LangChain Only When:
You need to quickly prototype or develop AI workflows that involve sequential tasks (such as document retrieval, text generation, or summarization) following a predefined linear pattern. Or you want to leverage AI agent patterns that can dynamically make decisions, but you don't need granular control over a complex workflow.
Use LangGraph Only When:
Your use case requires non-linear workflows where multiple components interact dynamically such as workflows that depend on conditions, need complex branching logic, error handling, or parallelism. You are willing to build custom implementations for the components that are not abstracted for you like in LangChain.
Use LangChain and LangGraph Together When:
You like the pre-built, abstracted components of LangChain, such as the out-of-the-box RAG capabilities and memory functionality, but also want to manage complex task flows using LangGraph's non-linear orchestration. Using both frameworks together can be a powerful way to combine the best abilities of each.
Ultimately, whether you choose LangChain, LangGraph, or a combination of both depends on the specific needs of your project.
Note: The opinions expressed in this article are solely those of the author and do not necessarily reflect the views or policies of any employer.
Still have questions or think that something needs to be further clarified? Drop me a DM on LinkedIn! I'm always eager to engage in food for thought and iterate on my work.
AI Agent Workflows: A Complete Guide on Whether to Build With LangGraph or LangChain was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story.