Build Your Own Blog Generator AI Agent Using LangGraph and Groq
In today’s fast-paced digital world, leveraging AI to automate tasks has become essential for staying ahead. When it comes to content creation, the ability to produce high-quality, engaging blogs at scale can be a game changer. Whether you’re a writer, marketer, or entrepreneur, AI saves valuable time and enhances creativity by streamlining repetitive processes. This blog delves into how you can harness the power of an AI-driven Blog Generator Agent to craft informative blogs with the help of tools like LangGraph and Groq.
Prerequisites
Before getting started, ensure you have the following:
- Basic Python knowledge.
- Understanding of LangGraph. The video below is a great place to start.
- API keys: access to a Groq API key (Groq provides API access to open-source models).
https://www.youtube.com/watch?v=IMPYpUehJKE&ab_channel=InvertedStone
How Does the Blog Generator Work?
At its core, the blog generator is a two-step workflow:
- Title Generation: The AI generates an engaging title based on a user-provided topic.
- Content Generation: It then writes a detailed blog post using the generated title as inspiration.
This process is implemented using LangGraph’s StateGraph, which defines the workflow as a series of nodes connected by edges.
Building the Blog Generator Agent
Here’s a step-by-step explanation of how this blog generator is built:
1. Setting up the environment
Start by installing the required libraries, ideally in a Conda virtual environment.
pip install langgraph langchain-core langchain-groq python-dotenv
Store your Groq API key securely in a .env file. Ensure that the .env file is located in your project's working directory.
GROQ_API_KEY="your_groq_key"
The script begins by loading environment variables using Python’s dotenv library. This ensures sensitive information like API keys is securely managed. The GROQ_API_KEY is set up to authenticate requests with the ChatGroq API.
from dotenv import load_dotenv
import os
# Load environment variables from .env file
load_dotenv()
os.environ["GROQ_API_KEY"] = os.getenv("GROQ_API_KEY")
2. Defining the state structure
The state of the blog (data shared across steps) is defined as:
from typing import Annotated
from typing_extensions import TypedDict
from langchain_core.messages import AnyMessage
from langgraph.graph.message import add_messages
class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]
A state graph in LangGraph operates based on structured state transitions. Here:
- The State dictionary includes a messages key.
- The Annotated type enforces a structure where messages is a list of LangChain AnyMessage objects.
- Here, add_messages is a reducer attached via Annotated: instead of overwriting the messages field, it appends each node's returned messages to the existing list, so the state accumulates the conversation as the graph runs (a quick sketch below illustrates this).
This state will be passed through different nodes in the graph, evolving as new content is added.
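To make this concrete, here is a small illustrative sketch of how the add_messages reducer behaves when called directly (the example messages are made up; inside the graph, LangGraph applies the reducer for you):
from langchain_core.messages import AIMessage, HumanMessage
from langgraph.graph.message import add_messages
# add_messages merges the update into the existing list instead of replacing it.
existing = [HumanMessage(content="Machine learning")]
update = [AIMessage(content="A generated blog title")]
merged = add_messages(existing, update)
print(len(merged))  # 2 -- both the original and the new message are kept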
3. Initializing the open-source large language model using Groq
from langchain_groq import ChatGroq
model = ChatGroq(model="Gemma2-9b-It", temperature=0.7)
The ChatGroq model (Gemma2-9b-It) is initialized with:
- temperature=0.7: Controls the randomness of the model's output. Higher values make the text more creative, while lower values make it deterministic.
The model serves as the core for generating the blog title and content.
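As a quick, optional sanity check (assuming GROQ_API_KEY is already set and using the model object defined above), you can call the model directly before wiring up the graph; the prompt string here is just an illustrative example:
# Purely illustrative: confirm the model responds before building the workflow.
reply = model.invoke("Suggest one blog title about machine learning.")
print(reply.content)  # the text of the returned AIMessage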
4. Creating the blog generation workflow
The make_blog_generation_graph function creates a state graph workflow using LangGraph's StateGraph. This graph defines the sequence of operations required to generate a blog.
from langchain_core.messages import SystemMessage
from langgraph.graph import StateGraph, START, END

def make_blog_generation_graph():
    """Create a blog generation agent"""
    graph_workflow = StateGraph(State)

    # Step 1: Generate a title based on the topic
    def generate_title(state):
        prompt_1 = SystemMessage(content="As an experienced writer generate one blog title.")
        return {"messages": [model.invoke([prompt_1] + state["messages"])]}

    # Step 2: Generate content based on the title
    def generate_content(state):
        prompt_2 = SystemMessage(content="As an experienced content creator write a blog with 500 word limit in 4 paragraphs.")
        return {"messages": [model.invoke([prompt_2] + state["messages"])]}

    # Add nodes to the graph
    graph_workflow.add_node("title_generation", generate_title)
    graph_workflow.add_node("content_generation", generate_content)

    # Define graph edges
    graph_workflow.add_edge("title_generation", "content_generation")
    graph_workflow.add_edge("content_generation", END)
    graph_workflow.add_edge(START, "title_generation")

    # Compile the graph into an executable agent
    agent = graph_workflow.compile()
    return agent
Let’s break down the make_blog_generation_graph function step by step.
Defining Workflow Nodes
The graph consists of two main nodes:
- Title Generation Node: accepts the initial state (initialized with the user's input), adds a system message instructing the model to generate a blog title, and updates the messages state with the model's response.
Note: the initial state is defined in the main script below.
def generate_title(state):
    prompt_1 = SystemMessage(content="As an experienced writer generate one blog title.")
    return {"messages": [model.invoke([prompt_1] + state["messages"])]}
- Content Generation Node: takes the title generated in the previous step, uses a system prompt instructing the model to write a blog (limited to 500 words in 4 paragraphs), and updates the state with the blog content.
def generate_content(state):
    prompt_2 = SystemMessage(content="As an experienced content creator write a blog with 500 word limit in 4 paragraphs.")
    return {"messages": [model.invoke([prompt_2] + state["messages"])]}
Adding Nodes and Defining Edges
graph_workflow.add_node("title_generation", generate_title)
graph_workflow.add_node("content_generation", generate_content)
graph_workflow.add_edge("title_generation", "content_generation")
graph_workflow.add_edge("content_generation", END)
graph_workflow.add_edge(START, "title_generation")
- The nodes (title_generation and content_generation) are added to the workflow.
- Edges are defined to control the flow:
Start -> Title Generation -> Content Generation -> End.
Compiling the Graph
The compile() method is used to transform the graph workflow into an executable agent.
agent = graph_workflow.compile()
We can visualize the agent graph in a Jupyter notebook using the code snippet given below.
from IPython.display import Image, display
# Display the graph as a Mermaid PNG
display(Image(agent.get_graph().draw_mermaid_png()))
5. Running the blog generation workflow
from langchain_core.messages import HumanMessage

if __name__ == "__main__":
    # Initialize the blog generation agent
    blog_agent = make_blog_generation_graph()

    # Initialize the input state with the blog topic
    initial_state = State(
        messages=[HumanMessage(content="Machine learning")]
    )

    # Run the full graph once and collect the final state
    response = blog_agent.invoke(initial_state)

    # Stream the graph to watch each node's output as it is produced
    # (note: this runs the graph a second time with the same input)
    for output in blog_agent.stream(initial_state):
        for key, value in output.items():
            print(f"Output from node: {key}")
            print("------")
            print(value['messages'][0].content)
            print("\n------\n")

    # Alternatively, print every message accumulated in the final state
    for message in response["messages"]:
        print(message.content)
The main script initializes the graph and provides an initial state, which includes the input topic (e.g., “Machine learning”) as a HumanMessage.
The invoke method runs the agent, executing all the nodes in the graph.
The stream method allows real-time monitoring of node outputs, printing intermediate results (e.g., the title and the blog paragraphs). Alternatively, you can iterate over the messages key in the response dictionary returned by invoke, which holds the full list of accumulated messages, as shown in the sketch below.
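If you only need the finished blog text rather than every intermediate message, here is a minimal sketch (assuming the same initial_state and response as above):
# The last accumulated message comes from the content_generation node,
# so it contains the finished blog post.
final_blog = response["messages"][-1].content
print(final_blog)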
Example Output
Running the script with the topic “Machine learning” prints the output of each node in turn: first the generated title, then the full blog post.
Key Features of the Workflow
- Dynamic Workflow: The state graph allows fine-grained control over the content generation process.
- Reusability: Nodes can be reused or extended to handle additional steps (e.g., summary generation), as sketched below.
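For instance, a hypothetical summary step could be appended inside make_blog_generation_graph. The node name, prompt, and wiring below are illustrative assumptions, not part of the original workflow; the existing content_generation -> END edge would be replaced by the two edges shown:
# Hypothetical extra node: summarize the generated blog (place inside
# make_blog_generation_graph, before graph_workflow.compile()).
def generate_summary(state):
    prompt_3 = SystemMessage(content="Summarize the blog above in two sentences.")
    return {"messages": [model.invoke([prompt_3] + state["messages"])]}

graph_workflow.add_node("summary_generation", generate_summary)
# Replace graph_workflow.add_edge("content_generation", END) with:
graph_workflow.add_edge("content_generation", "summary_generation")
graph_workflow.add_edge("summary_generation", END)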
Conclusion
This code shows how to use LangGraph and ChatGroq to create a simple and flexible workflow for AI-powered content generation. Whether you’re automating blog writing or creating advanced chatbots, these tools make it easy to manage the workflow and connect language models to structured tasks.
Thank you for reading and happy coding!