Quickstart
Start building awesome documentation in under 5 minutes
Introduction
This simple walkthrough will help you get started with building powerful applications using SuperAgentX in just a few steps.
1. Installation: How to install SuperAgentX on your system.
2. Create an Application: Set up a new SuperAgentX project with the basic structure.
3. Virtual Environment Setup: How to create and configure a Python virtual environment for your project.
4. Building a Use Case: Customizing the pipe.py file to fit your application’s needs.
5. Run the Application: Finally, how to execute your new project with a simple command.
Note: After installation, the superagentx-cli executable is available in ~/.local/bin on Linux. You may need to add this directory to your PATH environment variable.
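The Installation step is typically a single pip install; assuming the package is published on PyPI under the project's name:

```shell
# Install SuperAgentX (package name assumed to match the project)
pip install superagentx

# The CLI lands in ~/.local/bin on Linux; make sure it is on your PATH
export PATH="$HOME/.local/bin:$PATH"
```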
Create an Application
To create a new superagentx project, run the following command in your terminal. It will prompt you for a few inputs and create a project with the basic structure set up for your superagentx application.
Follow the prompts. After you provide your inputs, superagentx-cli creates an application with a structure like the one below.
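As a sketch, assuming the CLI exposes an app-creation subcommand (check superagentx-cli --help for the exact name in your installed version; the layout shown in the comments is illustrative):

```shell
# Scaffold a new project; the CLI prompts for details such as the app name
superagentx-cli app

# Illustrative layout for an app named "content_creator":
# content_creator/
# ├── pipe.py       # pipeline definition you customize
# ├── wspipe.py     # WebSocket entry point
# └── restpipe.py   # REST entry point
```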
Virtual Environment Setup
Now, you’ll need to create a Python virtual environment for your project:
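For example, using Python's built-in venv module:

```shell
# Create and activate a virtual environment in the project directory
cd content_creator
python -m venv venv
source venv/bin/activate   # on Windows: venv\Scripts\activate

# Install the project's dependencies inside the environment
pip install superagentx
```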
Build Your Use Case
If you're building a new use case, you need to modify the file at /home/ben/Content Creator/content_creator/pipe.py. This path is generated from the input you provide when creating the SuperAgentX application; make your changes in that file.
Step-by-Step process:
1. Import Package
This step imports various components from the superagentx library.
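The import block might look like the following; the exact module paths are assumptions based on the components this page names, so adjust them to your installed version:

```python
# Assumed import paths for the components used in this walkthrough
from superagentx.agent import Agent
from superagentx.agentxpipe import AgentXPipe
from superagentx.engine import Engine
from superagentx.handler.ai import AIHandler
from superagentx.llm import LLMClient
from superagentx.memory import Memory
from superagentx.prompt import PromptTemplate
```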
2. Define Method
The method get_content_creator_pipe initializes a content creation pipeline by configuring an LLM (Large Language Model) client using OpenAI's API. For more detail about LLMClient, refer here.
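A sketch of the client setup, assuming an OpenAI-style configuration dict (the config keys and model name are illustrative):

```python
from superagentx.llm import LLMClient  # import path assumed

# Configure the LLM client; it reads OPENAI_API_KEY from the environment
llm_config = {"llm_type": "openai", "model": "gpt-4o"}
llm_client = LLMClient(llm_config=llm_config)
```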
3. Memory
This line enables memory functionality in the pipeline by configuring a Memory object with the llm_client for storing and retrieving context. For more detail about Memory, refer here.
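Continuing from the previous step, a sketch of the memory setup (the memory_config parameter name is an assumption):

```python
from superagentx.memory import Memory  # import path assumed

# Shared memory backed by the same LLM client, so the pipe can
# store and retrieve conversational context
memory = Memory(memory_config={"llm_client": llm_client})
```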
4. Handler
This block initializes AIHandler, which is configured with the llm_client to handle AI-driven operations. For more details, refer here.
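Continuing the sketch (parameter name assumed):

```python
from superagentx.handler.ai import AIHandler  # import path assumed

# The AI handler performs the LLM-driven operations for the engine
ai_handler = AIHandler(llm=llm_client)
```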
5. PromptTemplate
This line creates an instance of PromptTemplate, which is used to define and manage prompt structures for the language model. For more detail, refer here.
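A minimal sketch, assuming a no-argument constructor:

```python
from superagentx.prompt import PromptTemplate  # import path assumed

# Defines and manages the prompt structure passed to the model
prompt_template = PromptTemplate()
```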
6. Engine
This line initializes an Engine object for the handler, combining it with the llm_client and prompt_template to process tasks with the specified handler and prompt structure. For more details, refer here.
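Continuing the sketch (parameter names assumed):

```python
from superagentx.engine import Engine  # import path assumed

# Combine the handler, LLM client, and prompt template into an engine
ai_engine = Engine(
    handler=ai_handler,
    llm=llm_client,
    prompt_template=prompt_template,
)
```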
7. Agent
This line creates an Agent with the role and goal of generating a list of URLs, configuring it with the llm_client, prompt_template, and engine to perform the task. For more details about Engine, refer here.
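Continuing the sketch; the name, role, and goal strings below are illustrative, and the parameter names are assumptions:

```python
from superagentx.agent import Agent  # import path assumed

# An agent with a role and goal, driven by the engine built above
ai_agent = Agent(
    name="Content Creator Agent",           # illustrative
    role="You are a content researcher",    # illustrative
    goal="Generate a list of URLs relevant to the topic",
    llm=llm_client,
    prompt_template=prompt_template,
    engines=[ai_engine],
)
```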
8. AgentXPipe
This line sets up an AgentXPipe, combining multiple agents (serper_agent, crawler_agent, and ai_agent) with shared memory to enable coordinated processing and communication between the agents. For more details, refer here.
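Continuing the sketch; serper_agent and crawler_agent are assumed to be built the same way as ai_agent in the previous step:

```python
from superagentx.agentxpipe import AgentXPipe  # import path assumed

# Wire the agents together with shared memory for coordinated processing
pipe = AgentXPipe(
    agents=[serper_agent, crawler_agent, ai_agent],
    memory=memory,
)
```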
All Together
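Putting the steps above together, the whole method might look like this sketch. Import paths, parameter names, and the model name are assumptions based on the components this page describes; the serper and crawler agents are elided for brevity:

```python
from superagentx.agent import Agent
from superagentx.agentxpipe import AgentXPipe
from superagentx.engine import Engine
from superagentx.handler.ai import AIHandler
from superagentx.llm import LLMClient
from superagentx.memory import Memory
from superagentx.prompt import PromptTemplate


async def get_content_creator_pipe() -> AgentXPipe:
    # 1-2. LLM client (reads OPENAI_API_KEY from the environment)
    llm_config = {"llm_type": "openai", "model": "gpt-4o"}
    llm_client = LLMClient(llm_config=llm_config)

    # 3. Shared memory for the pipe
    memory = Memory(memory_config={"llm_client": llm_client})

    # 4-5. Handler and prompt template
    ai_handler = AIHandler(llm=llm_client)
    prompt_template = PromptTemplate()

    # 6. Engine binding the handler, client, and prompt structure
    ai_engine = Engine(
        handler=ai_handler,
        llm=llm_client,
        prompt_template=prompt_template,
    )

    # 7. Agent with a role and goal (strings are illustrative)
    ai_agent = Agent(
        name="Content Creator Agent",
        role="You are a content researcher",
        goal="Generate a list of URLs relevant to the topic",
        llm=llm_client,
        prompt_template=prompt_template,
        engines=[ai_engine],
    )

    # 8. Pipe combining the agents with shared memory
    return AgentXPipe(agents=[ai_agent], memory=memory)
```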
Run the Application
Once everything is set up, you can run your application with this command:
This will start your SuperAgentX application and execute the tasks you’ve configured.
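For example, from the project root with the virtual environment activated (the script name is an assumption; check your generated project for its exact entry point):

```shell
# Run the generated pipeline entry script
python content_creator/pipe.py
```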
Run the Application with Verbose
This applies to wspipe.py and restpipe.py, and to your custom implementations as well. Valid values for the verbose option are 1, True, true, and TRUE.