RudraDB-Opin Documentation
The complete guide to building relationship-aware AI applications with the world's first free relationship-aware vector database.
🚀 Installation & Setup
Zero Configuration Required
RudraDB-Opin works with any ML model automatically. No dimension specification, no complex setup - just install and start building!
Install via pip
```bash
pip install rudradb-opin
```
Verify Installation
```python
import rudradb
import numpy as np

print(f"🧬 RudraDB-Opin {rudradb.__version__}")
print(f"📊 Capacity: {rudradb.MAX_VECTORS} vectors, {rudradb.MAX_RELATIONSHIPS} relationships")

# Test basic functionality
db = rudradb.RudraDB()
print("✅ Installation successful!")
```
Requirements
- Python 3.8+
- NumPy >= 1.20.0
- Works on Windows, macOS, Linux
🤖 Revolutionary Auto Features
What makes RudraDB-Opin truly revolutionary is its auto-intelligence - features that eliminate manual configuration and build intelligent connections automatically.
🎯 Auto-Dimension Detection
```python
# No dimension specification needed!
db = rudradb.RudraDB()  # Auto-detects from first embedding

# OpenAI embeddings (1536D) - auto-detected
openai_embedding = get_openai_embedding("text")
db.add_vector("openai_doc", openai_embedding)
print(f"Auto-detected: {db.dimension()}D")  # 1536

# Switch to Sentence Transformers (384D)
db2 = rudradb.RudraDB()  # Fresh auto-detection
st_embedding = sentence_transformer.encode("text")
db2.add_vector("st_doc", st_embedding)
print(f"Auto-detected: {db2.dimension()}D")  # 384
```
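Conceptually, dimension auto-detection can be as simple as locking onto the shape of the first embedding and rejecting mismatches afterwards. A minimal toy sketch (not RudraDB-Opin's actual implementation):

```python
import numpy as np

class AutoDimDB:
    """Toy store that locks its dimension to the first embedding it sees."""

    def __init__(self):
        self._dim = None
        self._vectors = {}

    def add_vector(self, vec_id, embedding):
        embedding = np.asarray(embedding, dtype=np.float32)
        if self._dim is None:
            self._dim = embedding.shape[0]  # auto-detect on first insert
        elif embedding.shape[0] != self._dim:
            raise ValueError(f"expected {self._dim}D, got {embedding.shape[0]}D")
        self._vectors[vec_id] = embedding

    def dimension(self):
        return self._dim

db = AutoDimDB()
db.add_vector("doc", np.random.rand(384))
print(db.dimension())  # 384
```

This is why a fresh database instance is needed to switch models: the dimension is fixed by whichever embedding arrives first.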
🧠 Auto-Relationship Detection
```python
# Just add documents with rich metadata
db.add_vector("ai_intro", intro_embedding, {
    "category": "AI",
    "difficulty": "beginner",
    "tags": ["ai", "introduction"],
    "type": "concept"
})
db.add_vector("ml_advanced", advanced_embedding, {
    "category": "AI",
    "difficulty": "advanced",
    "tags": ["ml", "algorithms"],
    "type": "concept"
})

# Auto-creates relationships:
# - Semantic (same category)
# - Temporal (beginner → advanced)
# - Associative (shared tags)
```
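To see how metadata alone can drive this kind of detection, here is a toy heuristic mirroring the rules listed above (plain Python, purely illustrative; RudraDB-Opin's actual detection logic is internal):

```python
def infer_relationships(meta_a, meta_b):
    """Toy heuristic: derive relationship types from two metadata dicts."""
    rels = []
    # Semantic: same category
    if meta_a.get("category") and meta_a.get("category") == meta_b.get("category"):
        rels.append("semantic")
    # Temporal: difficulty forms a learning progression
    levels = {"beginner": 0, "intermediate": 1, "advanced": 2}
    a, b = meta_a.get("difficulty"), meta_b.get("difficulty")
    if a in levels and b in levels and levels[a] < levels[b]:
        rels.append("temporal")
    # Associative: shared tags
    if set(meta_a.get("tags", [])) & set(meta_b.get("tags", [])):
        rels.append("associative")
    return rels

print(infer_relationships(
    {"category": "AI", "difficulty": "beginner", "tags": ["ai", "introduction"]},
    {"category": "AI", "difficulty": "advanced", "tags": ["ml", "algorithms"]},
))  # ['semantic', 'temporal']
```

The richer the metadata you attach to each vector, the more connection signals a heuristic like this has to work with.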
🧠 Core Concepts
Vectors
Vectors in RudraDB-Opin store embeddings with rich metadata, enabling both similarity search and intelligent relationship building.
```python
import numpy as np

# Basic vector addition
embedding = np.random.rand(384).astype(np.float32)
db.add_vector("doc_1", embedding, {
    "title": "Machine Learning Basics",
    "author": "Dr. Smith",
    "category": "education",
    "tags": ["ml", "tutorial"],
    "difficulty": "beginner"
})
```
Relationships
RudraDB-Opin supports 5 relationship types, each optimized for different connection patterns:
| Type | Use Case | Example |
|---|---|---|
| `semantic` | Content similarity, topical connections | Related articles, similar products |
| `hierarchical` | Parent-child structures, categorization | Knowledge trees, org charts |
| `temporal` | Sequential content, time-based flow | Course progression, workflows |
| `causal` | Cause-effect, problem-solution pairs | Troubleshooting, Q&A systems |
| `associative` | General associations, loose connections | Recommendations, cross-references |
Relationship-Aware Search
```python
# Traditional similarity search
results = db.search(query_embedding, rudradb.SearchParams(
    top_k=5,
    include_relationships=False
))

# Relationship-aware search
enhanced_results = db.search(query_embedding, rudradb.SearchParams(
    top_k=10,
    include_relationships=True,  # 🔥 Enable relationship intelligence
    max_hops=2,                  # Multi-hop discovery
    relationship_weight=0.3      # Balance similarity + relationships
))
# Surfaces relevant results that pure similarity search misses
```
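The `relationship_weight` parameter trades off similarity against relationship evidence. The exact blending formula is internal to RudraDB-Opin, but one plausible weighted blend looks like this (illustrative sketch only):

```python
def combined_score(similarity, relationship_strength, relationship_weight=0.3):
    """One plausible blend of direct similarity and relationship evidence."""
    return (1 - relationship_weight) * similarity \
        + relationship_weight * relationship_strength

# A moderately similar doc with a strong relationship can outrank
# a slightly more similar doc with no relationship:
print(round(combined_score(0.60, 0.9), 3))  # 0.69
print(round(combined_score(0.65, 0.0), 3))  # 0.455
```

Setting the weight to 0.0 reduces the blend to plain similarity search; higher values let relationship structure reorder the results.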
Multi-Hop Discovery
Find connections through relationship chains - documents that are indirectly related through multiple steps.
```python
# Find documents connected through relationship chains
connected = db.get_connected_vectors("starting_doc", max_hops=2)
for vector, hop_count in connected:
    connection_type = "Direct" if hop_count == 0 else f"{hop_count}-hop"
    print(f"📄 {vector['id']}: {connection_type} connection")

# Example chain: A →(semantic)→ B →(causal)→ C
```
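The traversal above can be sketched as a breadth-first search over the relationship graph (illustrative only; this assumes a plain adjacency dict, not RudraDB-Opin's internal storage):

```python
from collections import deque

def connected_vectors(graph, start, max_hops=2):
    """BFS over a relationship graph; returns (vector_id, hop_count) pairs."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue  # don't expand beyond the hop limit
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen[neighbor] = seen[node] + 1
                queue.append(neighbor)
    return sorted(seen.items(), key=lambda kv: kv[1])

# A -(semantic)-> B -(causal)-> C, with D also one hop from B
graph = {"A": ["B"], "B": ["C", "D"]}
print(connected_vectors(graph, "A", max_hops=2))
# [('A', 0), ('B', 1), ('C', 2), ('D', 2)]
```

Because each extra hop expands the frontier, capping `max_hops` (Opin allows up to 2) keeps traversal cheap.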
📚 API Reference
RudraDB Class
Constructor
```python
rudradb.RudraDB(dimension=None, config=None)
```
- `dimension`: Optional[int] - Embedding dimension (auto-detected if `None`)
- `config`: Optional[dict] - Configuration options
Properties
| Method | Returns | Description |
|---|---|---|
| `dimension()` | `int` | Current embedding dimension |
| `vector_count()` | `int` | Number of vectors stored |
| `relationship_count()` | `int` | Number of relationships stored |
| `is_empty()` | `bool` | `True` if no vectors or relationships |
SearchParams Class
```python
params = rudradb.SearchParams(
    top_k=10,                        # Number of results
    include_relationships=True,      # Enable relationship search
    max_hops=2,                      # Maximum relationship hops
    similarity_threshold=0.1,        # Minimum similarity score
    relationship_weight=0.3,         # Relationship influence (0.0-1.0)
    relationship_types=["semantic"]  # Filter relationship types
)
```
🔌 ML Framework Integrations
OpenAI Integration
```python
import openai
import rudradb
import numpy as np

# Auto-detects OpenAI's 1536D embeddings
db = rudradb.RudraDB()

# Add document with an OpenAI embedding
# (legacy openai<1.0 API shown; newer SDKs use client.embeddings.create)
response = openai.Embedding.create(
    model="text-embedding-ada-002",
    input="Your document text here"
)
embedding = np.array(response['data'][0]['embedding'], dtype=np.float32)
db.add_vector("doc1", embedding, {"source": "openai"})
print(f"Auto-detected dimension: {db.dimension()}D")  # 1536
```
HuggingFace Integration
```python
from sentence_transformers import SentenceTransformer
import numpy as np
import rudradb

# Auto-detects any model's dimensions
model = SentenceTransformer('all-MiniLM-L6-v2')  # 384D
db = rudradb.RudraDB()

# Batch processing
texts = ["Document 1", "Document 2", "Document 3"]
embeddings = model.encode(texts)
for i, (text, embedding) in enumerate(zip(texts, embeddings)):
    db.add_vector(f"doc_{i}", embedding.astype(np.float32), {"text": text})
print(f"Auto-detected dimension: {db.dimension()}D")  # 384
```
LangChain Integration
```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.text_splitter import CharacterTextSplitter
import numpy as np
import rudradb

# Setup
embeddings = HuggingFaceEmbeddings()
text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=50)
db = rudradb.RudraDB()  # Auto-detects embedding dimensions

# Process documents
documents = [...]  # Your documents
chunks = text_splitter.split_documents(documents)

# Add with relationship building
for i, chunk in enumerate(chunks):
    embedding = embeddings.embed_query(chunk.page_content)
    db.add_vector(f"chunk_{i}", np.array(embedding, dtype=np.float32), {
        "content": chunk.page_content,
        "source": chunk.metadata.get("source")
    })

# Enhanced RAG search
def rag_search(query):
    query_embedding = embeddings.embed_query(query)
    return db.search(
        np.array(query_embedding, dtype=np.float32),
        rudradb.SearchParams(include_relationships=True)
    )
```
🚀 Advanced Topics
Capacity Management
RudraDB-Opin is designed with specific limits perfect for learning and prototyping:
Opin Specifications
- 100 vectors - Perfect tutorial size
- 500 relationships - Rich relationship modeling
- 2-hop traversal - Multi-degree discovery
- 5 relationship types - Complete feature set
```python
stats = db.get_statistics()
usage = stats['capacity_usage']
print(f"Vectors: {stats['vector_count']}/{rudradb.MAX_VECTORS}")
print(f"Relationships: {stats['relationship_count']}/{rudradb.MAX_RELATIONSHIPS}")
print(f"Vector usage: {usage['vector_usage_percent']:.1f}%")

# Upgrade recommendation
if usage['vector_usage_percent'] > 80:
    print("💡 Consider upgrading to full RudraDB for production scale!")
```
Performance Optimization
- Use `np.float32` for embeddings (memory efficiency)
- Keep metadata reasonably sized
- Use appropriate `top_k` values (5-20 is typical)
- Set `similarity_threshold` to filter out noise
- Limit `max_hops` for faster search
Upgrade to Production
When you've mastered relationship-aware search with RudraDB-Opin, upgrade seamlessly:
```python
import json

# 1. Export your data
data = db.export_data()
with open('my_data.json', 'w') as f:
    json.dump(data, f)

# 2. Upgrade the package
#    pip uninstall rudradb-opin
#    pip install rudradb

# 3. Import at production scale
import rudradb  # Now the full version!
new_db = rudradb.RudraDB()  # 100,000+ vector capacity
new_db.import_data(data)    # Same API, production scale!
```