Eliza
The core setup for your Bitcoin-focused editorial app

A New Flow for Bitcoin Editorials

To fully leverage Eliza as the core setup for your Bitcoin-focused editorial app, we’ve reimagined the flow to integrate Eliza’s capabilities effectively. Eliza is an open-source framework for building AI agents with Web3 integrations, built on TypeScript with a modular architecture. This comprehensive flow uses Eliza as the central hub, incorporating content generation, custom LLM training, voice chat functionality, and dynamic updates tailored to Bitcoin.

Core Flow Overview

Setup

Install Eliza and define a Bitcoin expert character to ensure consistent, expert-level responses.

Data

Create custom providers to pull Bitcoin data like price feeds, blockchain data, and news sentiment.

Content

Integrate a fine-tuned LLM to generate market analyses, articles, and trend reports.

Training

Use Replicate to automate LLM training on new content, keeping the model up-to-date.

Voice Chat

Leverage Discord and speech APIs for voice interactions with the AI agent.

Dynamic

Use Eliza’s memory to analyze trends and dynamically update prompts for relevance.

Implementation Details

Step 1: Setup

Install Eliza with npm install @elizaos/core (verify the package name against the official docs). Then create a character.json file to define the AI's personality.

{
  "name": "BitcoinExpert",
  "bio": "A seasoned analyst with 10+ years in Bitcoin.",
  "knowledge": "Bitcoin history, blockchain tech, market trends",
  "style": "Professional, clear, engaging"
}

Step 2: Data Collection

Create a custom provider in TypeScript to fetch data from sources like CoinGecko. Build a custom Bitcoin plugin to interact with the blockchain.

import { IAgentRuntime, Memory, Provider, State } from '@elizaos/core';

// Provider in @elizaos/core is an interface with a `get` method,
// not a base class, so we implement it rather than extend it.
export const bitcoinPriceProvider: Provider = {
  get: async (_runtime: IAgentRuntime, _message: Memory, _state?: State) => {
    const response = await fetch('https://api.coingecko.com/api/v3/coins/bitcoin');
    if (!response.ok) {
      throw new Error(`CoinGecko request failed: ${response.status}`);
    }
    return response.json();
  },
};

Step 3: Content Generation

Fine-tune a Hugging Face model on Bitcoin data, deploy it on Replicate, and call it via an API to generate content.

// Replicate predictions run asynchronously; this returns the initial
// prediction object (status "starting"), not the finished output.
async function generateContent(data: Record<string, unknown>) {
  const response = await fetch('https://api.replicate.com/v1/predictions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.REPLICATE_API_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ version: 'your-model-version-id', input: data }),
  });
  return response.json();
}
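Because Replicate predictions are asynchronous, the result has to be polled until it reaches a terminal status. Below is a minimal polling sketch; the fetcher is injected as a plain async function so the loop can be exercised without network access, and the status and field names follow Replicate's prediction objects.

```typescript
type PredictionStatus = 'starting' | 'processing' | 'succeeded' | 'failed' | 'canceled';

interface Prediction {
  status: PredictionStatus;
  output?: unknown;
  error?: string | null;
}

const TERMINAL: PredictionStatus[] = ['succeeded', 'failed', 'canceled'];

// Repeatedly fetch the prediction until it finishes or we give up.
export async function pollPrediction(
  getPrediction: () => Promise<Prediction>,
  { intervalMs = 1000, maxAttempts = 60 } = {}
): Promise<Prediction> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const prediction = await getPrediction();
    if (TERMINAL.includes(prediction.status)) return prediction;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Prediction did not finish in time');
}
```

In production the injected getter would GET the prediction's polling URL with the same Authorization header used above.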

Step 4: Voice Chat

Use discord.js to handle voice channels. Integrate AssemblyAI for speech-to-text and Google Cloud for text-to-speech.
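The voice round-trip (speech-to-text, agent reply, text-to-speech) can be sketched as a small pipeline with the three services injected as plain async functions. All names here are placeholders: the real AssemblyAI, Eliza, and Google Cloud clients would be wired in later, which also lets the flow be unit-tested with stubs.

```typescript
interface VoiceServices {
  transcribe: (audio: Uint8Array) => Promise<string>; // e.g. AssemblyAI speech-to-text
  respond: (text: string) => Promise<string>;         // the Eliza agent itself
  synthesize: (text: string) => Promise<Uint8Array>;  // e.g. Google Cloud text-to-speech
}

// One full turn: user audio in, agent audio out.
export async function handleVoiceMessage(
  audio: Uint8Array,
  services: VoiceServices
): Promise<Uint8Array> {
  const transcript = await services.transcribe(audio); // user speech -> text
  const reply = await services.respond(transcript);    // text -> agent reply
  return services.synthesize(reply);                   // reply -> audio for Discord playback
}
```

The returned audio would then be streamed into the voice channel via @discordjs/voice.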

Step 5: LLM Training & Updates

Use the Replicate API to programmatically start new training jobs with your latest content.

// Replicate's trainings endpoint is scoped to a trainable model version;
// the owner/name/version segments below are placeholders.
async function trainLLM(content: Record<string, unknown>) {
  await fetch('https://api.replicate.com/v1/models/your-org/custom-bitcoin-llm/versions/your-version-id/trainings', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.REPLICATE_API_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ destination: 'your-org/custom-bitcoin-llm', input: content }),
  });
}

Knowledge System Integration

Retrieval-Augmented Generation (RAG)

ElizaOS can be extended with a powerful Knowledge System using the @elizaos/plugin-knowledge plugin. This enables Retrieval-Augmented Generation (RAG), allowing your agent to ingest documents and answer questions based on that knowledge. It supports a wide range of file types, including PDF, Markdown, text files, and even source code.

Critical Plugin Order

For the knowledge plugin to work correctly, plugins must be loaded in a specific order in your character configuration file. The SQL plugin must come first, followed by an embeddings provider like OpenAI, and finally the knowledge plugin.

export const character: Character = {
  name: 'MyAgent',
  plugins: [
    '@elizaos/plugin-sql',       // 1. REQUIRED: Provides the database
    '@elizaos/plugin-openai',    // 2. REQUIRED: Provides embeddings
    '@elizaos/plugin-knowledge', // 3. Must be after dependencies
    // ... other plugins
  ],
};

According to the official documentation, once configured, you can simply create a knowledge folder in your project root and fill it with your documents. The agent will automatically load, process, and embed them on startup.
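Because this ordering is easy to get wrong, a small check can validate it before startup. The helper below is illustrative (it is not part of ElizaOS) and simply encodes the rule stated above: SQL first, then the embeddings provider, then the knowledge plugin.

```typescript
// Returns true only if all three required plugins are present
// and appear in the documented order.
export function checkPluginOrder(plugins: string[]): boolean {
  const sql = plugins.indexOf('@elizaos/plugin-sql');
  const embeddings = plugins.indexOf('@elizaos/plugin-openai');
  const knowledge = plugins.indexOf('@elizaos/plugin-knowledge');
  return (
    sql !== -1 && embeddings !== -1 && knowledge !== -1 &&
    sql < embeddings && embeddings < knowledge
  );
}
```

Running this against the character's plugins array at build time turns a subtle runtime misconfiguration into an explicit failure.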

NFT Collections Plugin

Comprehensive NFT Market Data

ElizaOS also features a powerful plugin for interacting with NFT collections, designed to provide market data, social analytics, and trading capabilities. It was originally built to integrate with services like Reservoir and CoinGecko.

Refactoring for OpenSea API

Reservoir has since been acquired by OpenSea, so this plugin requires a refactor to integrate directly with the official OpenSea API. This migration is crucial for maintaining functionality and accessing OpenSea’s comprehensive NFT dataset. The necessary changes include updating API endpoints, adapting data models to match OpenSea’s schemas, and implementing OpenSea’s authentication and rate-limiting strategies.

You can read about the original plugin’s features in the official documentation.

Thirdweb Nebula Plugin

Natural Language On-Chain Interaction

A core focus of this setup is the ai16z/plugin-thirdweb, which integrates with Thirdweb’s Nebula AI. This allows the agent to interact with on-chain data using natural language. You can ask it to analyze smart contracts, read wallet balances, look up token information, and much more.

Configuration and Usage

To enable the plugin, you simply need to add your Thirdweb secret key to your .env file. Once configured, you can send natural language queries to the agent to interact with the blockchain.

THIRDWEB_SECRET_KEY=your-thirdweb-secret-key-here

For example, you could ask: “What is the total NFT supply for 0xBC4CA0EdA7647A8aB7C2061c2E118A18a936f13D?” or “Does 0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045 hold USDC on Base?” More details can be found in the official documentation.

AI Video Generation with Luma

Text-to-Video with Luma AI

To incorporate cutting-edge video content, the @elizaos/plugin-video-generation can be used. This plugin integrates with Luma AI’s Dream Machine, allowing the agent to generate high-quality videos directly from text prompts.

Configuration

Enabling this functionality is straightforward. Simply add your Luma AI API key to the .env file.

LUMA_API_KEY=your_luma_api_key

Once configured, you can prompt the agent with commands like, “Generate a video of a sunset on the beach.” The plugin handles the API request and progress monitoring. You can find more details in the official documentation.

Connecting to External Services

Platform Integration via Services

Eliza uses a modular system of Services to connect the AI agent to various external platforms. Each service acts as a bridge, enabling the agent to communicate and operate on platforms like Discord, Twitter, Telegram, and more, while maintaining a consistent core behavior.

Focus on Slack Integration

For this project, the primary objective is to leverage Eliza within our Slack workspace. The Slack service plugin will allow the agent to participate in channels, analyze conversations, and integrate with our internal workflows and tools, turning it into a powerful assistant for team collaboration and process automation.

The setup guide and manifest for the Slack app are available in the client-slack GitHub repository. You can also learn more about the different services and their configurations in the official documentation.

Structuring the Slack Environment with Worlds

Mapping the Workspace to a World

To manage the integration, Eliza uses a concept called a “World.” A World is a container for a complete environment, including all its users and channels. For our purposes, our entire Slack workspace will be mapped to a single Eliza World.

Slack Channels as Eliza Rooms

Within this World, each Slack channel (e.g., #general, #design) is treated as a “Room.” This is the core concept that allows the agent to distinguish between different conversations. Each Room stores its own messages and participants, enabling the agent to maintain context across multiple channels. The channelId from Slack is mapped to the Room’s properties, and its type is set to GROUP to reflect its nature as a multi-participant channel.

// Example of mapping a Slack Workspace and Channel
// 1. The workspace is mapped to a World
await runtime.ensureWorldExists({
  name: 'OurCompany Workspace',
  agentId: runtime.agentId,
  serverId: 'T12345ABCDE', // Your Slack Workspace ID
});

// 2. Each channel is mapped to a Room within that World
await runtime.ensureRoomExists({
  name: '#general',
  source: 'slack',
  type: 'GROUP', // From ChannelType enum
  channelId: 'C024BE91L', // The specific Slack Channel ID
  serverId: 'T12345ABCDE',
});

This architecture ensures that the agent can operate effectively across our entire Slack environment. For more details, see the official documentation on Worlds and Rooms.
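To keep the channel-to-Room mapping in one place, a small hypothetical helper can convert Slack channel metadata into the parameters passed to ensureRoomExists. The Slack-side field names follow the Slack Web API's conversation objects; the helper itself is not part of ElizaOS.

```typescript
interface SlackChannel {
  id: string;   // e.g. 'C024BE91L'
  name: string; // e.g. 'general' (Slack stores names without the '#')
}

// Build the Room parameters for a Slack channel inside a given workspace World.
export function slackChannelToRoom(channel: SlackChannel, workspaceId: string) {
  return {
    name: `#${channel.name}`,
    source: 'slack',
    type: 'GROUP',          // multi-participant Slack channels map to GROUP rooms
    channelId: channel.id,  // the specific Slack channel ID
    serverId: workspaceId,  // the World's Slack workspace ID
  };
}
```

A sync job could then iterate every channel returned by Slack's conversations.list and pass each result to runtime.ensureRoomExists.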

Defining the Agent's Personality

Leveraging Hume AI Prompts in Eliza

A key part of creating a compelling agent is defining its personality. We can leverage our existing prompt design work from our Hume AI voice agent to shape the Eliza agent’s character. This is done through the Character file, which acts as the heart of the agent’s personality.

System Prompt and Message Examples

The core personality is set using the system prompt, where we can establish the agent’s tone, style, and core directives. To further refine its conversational patterns, we will populate the messageExamples with dialogues that reflect the desired interaction style from our Hume AI prompts. This ensures the agent’s responses are not only accurate but also aligned with our brand’s voice.

Connecting Dialogue to Actions

Beyond defining conversational style, the messageExamples are critical for teaching the agent which Actions to perform. An Action is a specific capability, like sending crypto or generating an image. By including an actions array in an example response, we train the agent to trigger that specific function when a similar user request is made. This is how we connect natural language commands to the agent’s functional tools, giving it the ability to do things rather than just talk about them.

// Example Character File
export const ourAgentCharacter: Character = {
  name: 'OurAgent',
  system: 'You are a helpful and professional AI assistant with a friendly and approachable tone. You are an expert in Bitcoin and Web3.',
  messageExamples: [
    [
      { name: '{{user}}', content: { text: 'What is a Bitcoin ETF?' } },
      { name: 'OurAgent', content: { text: 'Great question! A Bitcoin ETF is...' } }
    ],
    [
      { name: '{{user}}', content: { text: 'Can you send 1 BTC to my friend?' } },
      { 
        name: 'OurAgent', 
        content: { 
          text: 'Of course, I can help with that. What is the recipient’s address?',
          actions: ['SEND_BTC'] // <-- Triggers the SEND_BTC action
        } 
      }
    ]
  ],
  // ... other plugins and settings
};

By porting our proven prompt architecture, we can ensure a consistent and high-quality user experience. You can read more in the official documentation on Character files and see how they connect to agent capabilities in the documentation on Actions.
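As a sanity check on a character file, the action names referenced across messageExamples can be collected and compared against the Actions actually registered with the runtime. The helper below is illustrative, and its types mirror the shapes in the example above rather than Eliza's full Character type.

```typescript
interface ExampleMessage {
  name: string;
  content: { text: string; actions?: string[] };
}

// Collect every distinct action name mentioned in the example dialogues.
export function referencedActions(examples: ExampleMessage[][]): string[] {
  const actions = new Set<string>();
  for (const dialogue of examples) {
    for (const message of dialogue) {
      for (const action of message.content.actions ?? []) actions.add(action);
    }
  }
  return [...actions];
}
```

Any name returned here that lacks a registered Action implementation is an example the agent can imitate but never actually execute.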

To help create and manage these complex character files, the community has built the Eliza Character Generator, a helpful tool for scaffolding and exporting character configurations.

Agent Self-Improvement with Evaluators

Cognitive Processing and Memory

Eliza agents can learn and evolve through a powerful feature called Evaluators. These are cognitive components that run in the background after a conversation, allowing the agent to reflect on what was said. This process is similar to how humans form episodic memories.

Fact and Reflection System

The system includes a Fact Evaluator to extract key information (facts, statuses, and opinions) and a Reflection Evaluator for self-assessment on conversational performance and relationship tracking. This process allows the agent to build a persistent knowledge base about users and topics, improving its context-awareness over time.

// After a conversation, an evaluator might extract:
[
  {
    "claim": "User works at a tech startup",
    "type": "fact"
  },
  {
    "claim": "User enjoys their new job",
    "type": "opinion"
  }
]

This self-improvement loop is crucial for creating a truly intelligent and adaptive agent. You can learn more from the official documentation on Evaluators.
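Downstream code could split extracted claims by type before persisting them, for instance to weight facts more heavily than opinions. The helper below is illustrative (not Eliza's actual API) and assumes the claim shape shown in the example output above.

```typescript
interface Claim {
  claim: string;
  type: 'fact' | 'status' | 'opinion';
}

// Partition evaluator output into the three claim categories.
export function partitionClaims(claims: Claim[]) {
  return {
    facts: claims.filter((c) => c.type === 'fact'),
    statuses: claims.filter((c) => c.type === 'status'),
    opinions: claims.filter((c) => c.type === 'opinion'),
  };
}
```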

Database and Persistence with Supabase

Data Persistence with a Supabase Adapter

For the agent’s memory and data persistence, we will use Supabase. Eliza’s database system is built on a flexible adapter pattern, and the community has provided an @elizaos-plugins/adapter-supabase that integrates directly with Supabase’s PostgreSQL backend.

Configuration and Setup

This approach gives us a scalable, managed database with real-time capabilities without having to manage our own PostgreSQL instance. The setup involves adding the Supabase URL and anonymous key to the agent’s character configuration and running the provided SQL schema to prepare the database.

// Add to character configuration
"plugins": ["@elizaos-plugins/adapter-supabase"],
"settings": {
  "secrets": {
    "SUPABASE_URL": "your-supabase-project-url",
    "SUPABASE_ANON_KEY": "your-supabase-anon-key"
  }
}

The full setup instructions and database schema can be found in the official adapter repository.

Automated Engineering with Devin

AI-Powered Software Engineering

With the recent price reduction of Devin 2.0, integrating their AI software engineer has become a highly viable strategy. The @elizaos/plugin-devin allows our Eliza agent to delegate complex software engineering tasks to a specialized AI. This can be used for everything from writing new code to refactoring existing components.

Starting a Devin Session

To enable this, we need to add the DEVIN_API_TOKEN to our environment variables. Once configured, we can use the START_DEVIN_SESSION action to initiate an engineering task. The agent can pass a natural language prompt to Devin, which will then begin working on the task in a managed session.

// To start a Devin session via an action:
await runtime.runAction("START_DEVIN_SESSION", {
    content: { text: "Refactor the NFT collections plugin to use the OpenSea API instead of Reservoir." }
});

This integration dramatically expands our agent’s capabilities, allowing it to autonomously manage its own development and maintenance. You can find the plugin and more details in the official plugin repository.

Deployment with Fleek

Secure and Scalable Hosting

For deployment, we will use Fleek, a platform designed for hosting autonomous AI agents. Fleek offers one-click deployments for Eliza agents, which simplifies the process of launching our agent into a production environment.

Why Fleek?

Fleek is the ideal choice because it provides a globally-distributed, secure, and privacy-preserving environment by leveraging Trusted Execution Environments (TEEs). This ensures that our agent’s operations are verifiable and protected. Their infrastructure is built to handle scaling effortlessly, allowing us to focus on the agent’s personality and capabilities rather than deployment complexities.

You can learn more about deploying Eliza agents on Fleek’s documentation.

Architectural Summary

This document outlines the architecture for a sophisticated, Bitcoin-focused AI agent built on the modular ElizaOS framework. The agent’s core personality is defined by our proven Hume AI prompts, ensuring a consistent and high-quality conversational experience.

Its capabilities are extended through a series of powerful plugins. A Knowledge System (RAG) allows it to ingest and learn from our internal documents, while Evaluators enable it to reflect on conversations and improve over time. Persistence and long-term memory are handled by a scalable Supabase backend.

The agent can perform a wide range of Actions: from on-chain analysis via the Thirdweb plugin to AI video generation with Luma. Critically, with the new affordability of Devin 2.0, we will integrate the Devin plugin to give the agent autonomous software engineering capabilities, allowing it to refactor its own code.

It will be integrated directly into our Slack workspace, which is mapped to an Eliza “World,” with each channel functioning as a “Room” for contextual interactions. The entire system will be deployed on Fleek, a platform designed for secure and scalable hosting of autonomous AI agents.

This architecture creates a powerful, autonomous editorial assistant that can not only generate content but also learn, perform complex tasks, and ultimately evolve its own functionality.