Data Ingestion & Knowledge Sources
✅ Auto-Indexing – Point it at files and it indexes unstructured data automatically, with no manual setup
✅ Auto-Sync – Connected repositories sync automatically, document changes reflected almost instantly
File Formats – Supports PDF, DOCX, PPT, TXT and common enterprise formats
⚠️ Limited Scope – No website crawling or YouTube ingestion, narrower than CustomGPT
Enterprise Scale – Handles large corporate data sets, exact limits not published
File-Based Workflow – Drop PDFs, DOCX, PPTX, and HTML into a folder and embed them via script (see the sketch after this list)
GUI Knowledge Editor – Add documents on-the-fly through basic interface
Manual Processing – ⚠️ No web crawler or automatic refresh capabilities
Local Storage – ✅ All data stays on your machine for air-gapped deployments
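To make the file-based workflow above concrete, here is a minimal sketch of a folder-based ingestion step, assuming sentence-transformers with the multilingual-e5-base model and a FAISS index. The folder path, chunking scheme, and prefixes are illustrative, not SimplyRetrieve's actual script.

```python
# Illustrative folder-based ingestion: read local files, chunk them, embed the chunks,
# and store them in a FAISS index. Paths and chunk size are hypothetical.
import json
from pathlib import Path

import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

KNOWLEDGE_DIR = Path("./knowledge")                      # hypothetical drop folder
model = SentenceTransformer("intfloat/multilingual-e5-base")

def chunk(text: str, size: int = 500) -> list[str]:
    """Naive fixed-size chunking; real pipelines split on document structure."""
    return [text[i:i + size] for i in range(0, len(text), size)]

# Plain-text formats only here; PDF/DOCX would need a text-extraction library first.
passages: list[str] = []
for path in KNOWLEDGE_DIR.glob("**/*"):
    if path.suffix.lower() in {".txt", ".md", ".html"}:
        passages.extend(chunk(path.read_text(errors="ignore")))

# e5 models are trained with "passage: " / "query: " prefixes.
embeddings = model.encode([f"passage: {p}" for p in passages],
                          normalize_embeddings=True)
index = faiss.IndexFlatIP(embeddings.shape[1])           # inner product = cosine on unit vectors
index.add(np.asarray(embeddings, dtype="float32"))

faiss.write_index(index, "knowledge.faiss")
Path("passages.json").write_text(json.dumps(passages))
print(f"Indexed {len(passages)} chunks from {KNOWLEDGE_DIR}")
```

PDF, DOCX, and PPTX files would need an extraction step before chunking; only the plain-text formats are handled directly in this sketch.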
1,400+ file formats – PDF, DOCX, Excel, PowerPoint, Markdown, HTML + auto-extraction from ZIP/RAR/7Z archives
Website crawling – Sitemap indexing with configurable depth for help docs, FAQs, and public content
Multimedia transcription – AI Vision, OCR, YouTube/Vimeo/podcast speech-to-text built-in
Cloud integrations – Google Drive, SharePoint, OneDrive, Dropbox, Notion with auto-sync
Knowledge platforms – Zendesk, Freshdesk, HubSpot, Confluence, Shopify connectors
Massive scale – 60M words (Standard) / 300M words (Premium) per bot with no performance degradation
⚠️ Standalone Only – Own chat/search interface, not a "deploy everywhere" platform
⚠️ No External Channels – No Slack bot, Zapier connector, or public API
Web/Desktop UI – Users interact through Pyx's interface, with little integration into third-party chat tools
Custom Integration – Deeper integrations require custom dev work or future updates
Local Gradio GUI – Python scripts for queries with no pre-built channels
No Native Integrations – ⚠️ No Slack, Teams, or website widgets out-of-box
Custom Wrappers Required – Build your own connectors to forward messages
Website embedding – Lightweight JS widget or iframe with customizable positioning
CMS plugins – WordPress, WIX, Webflow, Framer, SquareSpace native support
5,000+ app ecosystem – Zapier connects CRMs, marketing, e-commerce tools
MCP Server – Integrate with Claude Desktop, Cursor, ChatGPT, Windsurf
OpenAI SDK compatible – Drop-in replacement for OpenAI API endpoints
LiveChat + Slack – Native chat widgets with human handoff capabilities
Conversational Search – Context-aware Q&A over enterprise documents with follow-up questions
⚠️ Internal Focus – Designed for knowledge management, no lead capture or human handoff
Multi-Language – Likely supports multiple languages, though not a headline feature
⚠️ Basic Analytics – Stores chat history, fewer business insights than customer-facing tools
Open-Source RAG Bot – Runs on local LLMs with streaming responses
Single-Turn Q&A – ⚠️ Limited multi-turn conversation and long-term memory
Retrieval Tuning Module – Transparency layer showing answer construction process
Basic Interactions – No lead capture or human handoff features
✅ #1 accuracy – Median 5/5 in independent benchmarks, 10% lower hallucination than OpenAI
✅ Source citations – Every response includes clickable links to original documents
✅ 93% resolution rate – Handles queries autonomously, reducing human workload
✅ 92 languages – Native multilingual support without per-language config
✅ Lead capture – Built-in email collection, custom forms, real-time notifications
✅ Human handoff – Escalation with full conversation context preserved
⚠️ Minimal Branding – Logo/color tweaks only, designed as internal tool not white-label
⚠️ No Embedding – Standalone interface, no domain-embed or widget options available
Pyx UI Only – The interface remains Pyx-branded by design; public branding is not supported
Security Focus – Emphasis on user management and access controls over theming
Plain Gradio Interface – Minimal theming with developer-focused design
Source Code Customization – Tweak code or build custom front-end for branding
Full white-labeling included – Colors, logos, CSS, custom domains at no extra cost
2-minute setup – No-code wizard with drag-and-drop interface
Persona customization – Control AI personality, tone, response style via pre-prompts
Visual theme editor – Real-time preview of branding changes
Domain allowlisting – Restrict embedding to approved sites only
⚠️ Undisclosed Model – Likely GPT-3.5/GPT-4 but exact model not publicly documented
⚠️ No Model Selection – Cannot switch LLMs or configure speed vs accuracy tradeoffs
⚠️ Single Configuration – Every query uses same model, no toggles or fine-tuning
Closed Architecture – Model details, context window, capabilities hidden from users intentionally
WizardVicuna-13B Default – Instruction-tuned open-source model included
Hugging Face Compatible – Swap any model with sufficient GPU resources
Full Local Control – ✅ No external APIs or cloud dependencies
Model Limitations – ⚠️ Smaller models won't match GPT-4 depth
GPT-5.1 models – Latest thinking models (Optimal & Smart variants)
GPT-4 series – GPT-4, GPT-4 Turbo, GPT-4o available
Claude 4.5 – Anthropic's Opus available for Enterprise
Auto model routing – Balances cost/performance automatically
Zero API key management – All models managed behind the scenes
Developer Experience (API & SDKs)
⚠️ No API – No open API or SDKs, everything through Pyx interface
⚠️ No Embedding – Cannot integrate into other apps or call programmatically
Closed Ecosystem – No GitHub examples, community plug-ins, or extensibility options
Turnkey Only – Great for ready-made tool, limits deep customization or extensions
Python Script Interface – No formal REST API or SDK
Subprocess Integration – Call the scripts directly or build a custom wrapper (see the sketch below)
Open Source Access – ✅ Full code access for modification
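For teams that want to call a local script from another application, a thin subprocess wrapper is usually enough. The script name and CLI flag below are placeholders, not the repository's actual entry point.

```python
# Minimal wrapper that shells out to a local query script and returns its output.
# "chat.py" and "--query" are hypothetical; adapt to the repo's real entry point.
import subprocess

def ask(question: str) -> str:
    """Run the local RAG script once and return stdout as the answer."""
    result = subprocess.run(
        ["python", "chat.py", "--query", question],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(ask("What is our refund policy?"))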
REST API – Full-featured for agents, projects, data ingestion, chat queries
Python SDK – Open-source customgpt-client with full API coverage
Postman collections – Pre-built requests for rapid prototyping
Webhooks – Real-time event notifications for conversations and leads
OpenAI compatible – Use existing OpenAI SDK code with minimal changes
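OpenAI compatibility means existing OpenAI SDK code can be repointed by swapping the base URL and API key. The endpoint and model identifier below are assumptions for illustration; check the official API docs for the exact values.

```python
# Reusing the OpenAI Python SDK against an OpenAI-compatible endpoint.
# base_url and the "model" value are illustrative placeholders, not confirmed values.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_CUSTOMGPT_API_KEY",
    base_url="https://app.customgpt.ai/api/v1",   # assumed; verify in the API docs
)

response = client.chat.completions.create(
    model="your-agent-id",                        # placeholder agent/project identifier
    messages=[{"role": "user", "content": "How do I reset my password?"}],
)
print(response.choices[0].message.content)
```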
Real-Time Answers – Serves accurate responses from internal documents, though public benchmarks are sparse
Auto-Sync Freshness – Connected repositories keep retrieval context always current automatically
⚠️ Limited Transparency – No anti-hallucination metrics or advanced re-ranking details published
Competitive RAG – Likely comparable to standard GPT-based systems on relevance control
Slower Inference – ⚠️ 3-10+ seconds per reply on single GPU
Decent Accuracy – Good when relevant documents are found; struggles with complex queries
FAISS Vector Search – Fast retrieval using Facebook's library
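Pairing with the ingestion sketch earlier, query-time retrieval with FAISS looks roughly like this: embed the question, pull the top-k nearest chunks, and hand them to the local LLM as context. File names and prompt wording are illustrative, not the project's own retrieval module.

```python
# Query-time retrieval: embed the question, find the k nearest chunks in the FAISS
# index, and assemble a grounded prompt for the local model. Illustrative only.
import json
from pathlib import Path

import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/multilingual-e5-base")
index = faiss.read_index("knowledge.faiss")
passages = json.loads(Path("passages.json").read_text())

def retrieve(question: str, k: int = 4) -> list[str]:
    query_vec = model.encode([f"query: {question}"], normalize_embeddings=True)
    _, ids = index.search(np.asarray(query_vec, dtype="float32"), k)
    return [passages[i] for i in ids[0]]

question = "How many vacation days do new hires get?"
context = "\n\n".join(retrieve(question))
prompt = (
    "Answer using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {question}"
)
# `prompt` would then be sent to the local LLM (e.g., WizardVicuna-13B).
```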
Sub-second responses – Optimized RAG with vector search and multi-layer caching
Benchmark-proven – 13% higher accuracy, 34% faster than OpenAI Assistants API
Anti-hallucination tech – Responses grounded only in your provided content
OpenGraph citations – Rich visual cards with titles, descriptions, images
99.9% uptime – Auto-scaling infrastructure handles traffic spikes
Customization & Flexibility (Behavior & Knowledge)
✅ Auto-Sync Updates – Knowledge base updated without manual uploads or scheduling
⚠️ No Persona Controls – AI voice stays neutral, no tone or behavior customization
✅ Access Controls – Strong role-based permissions, admins set document visibility per user
Closed Environment – Great for content updates, limited for AI behavior or deployment
Deep Parameter Control – Tweak retrieval params, system prompts, knowledge weighting
Embedding Model Swap – Replace the default multilingual-e5-base with alternatives (see the sketch after this list)
Pipeline Modification – ✅ Full source access for custom logic
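Swapping the embedding model is mostly a one-line change, with one caveat: the vector index dimension must match the new model, so the corpus has to be re-embedded. The alternative model named below is just an example from the Hugging Face hub, not a recommendation from the project.

```python
# Embedding-model swap sketch: the key constraint is that the FAISS index dimension
# must match the new model's output size, so swapping requires re-embedding.
import faiss
from sentence_transformers import SentenceTransformer

default_model = SentenceTransformer("intfloat/multilingual-e5-base")   # 768-dim
new_model = SentenceTransformer("BAAI/bge-small-en-v1.5")              # 384-dim example

dim = new_model.get_sentence_embedding_dimension()
index = faiss.IndexFlatIP(dim)   # rebuild; an index built for 768-dim vectors won't work
print(f"Re-embed corpus: {default_model.get_sentence_embedding_dimension()} -> {dim} dims")
```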
Live content updates – Add/remove content with automatic re-indexing
System prompts – Shape agent behavior and voice through instructions
Multi-agent support – Different bots for different teams
Smart defaults – No ML expertise required for custom behavior
Seat-Based Pricing – ~$30 per user per month, predictable monthly costs
✅ Cost-Effective Small Teams – Affordable for teams under 50 users
⚠️ Large Team Costs – 100 users = $3,000/month, can scale expensively
Unlimited Content – Document/token limits not published, gated only by user seats
Free Trial + Enterprise – Hands-on trial available, custom pricing for large deployments
MIT Licensed – ✅ Completely free with no subscription fees
Infrastructure Costs – Pay only for GPU hardware or cloud servers
Manual Scaling – ⚠️ Spin up and manage your own hardware
Standard: $99/mo – 60M words, 10 bots
Premium: $449/mo – 300M words, 100 bots
Auto-scaling – Managed cloud scales with demand
Flat rates – No per-query charges
✅ GDPR Compliance – Germany-based, implying EU data protection and regional data sovereignty
✅ Enterprise Privacy – Data isolated per customer, encrypted in transit and rest
✅ No Model Training – Customer data not used for external LLM training
✅ Role-Based Access – Built-in controls, admins set document visibility per role
⚠️ Limited Certifications – On-premises deployment and SOC 2 / ISO 27001 / HIPAA certifications not publicly documented
100% Local Execution – ✅ Perfect for sensitive data and air-gapped environments
No External Transmission – All processing stays on-premises
DIY Security – ⚠️ No built-in auth, you implement access controls
SOC 2 Type II + GDPR – Third-party audited compliance
Encryption – 256-bit AES at rest, SSL/TLS in transit
Access controls – RBAC, 2FA, SSO, domain allowlisting
Data isolation – Never trains on your data
Observability & Monitoring
Basic Stats – User activity, query counts, top-referenced documents for admins
⚠️ No Deep Analytics – No conversation analytics dashboards or real-time logging
Adoption Tracking – Useful for usage monitoring, lighter insights than full suites
Set-and-Forget – Minimal monitoring overhead, contact support for issues
Analysis Tab – Shows retrieved docs and query construction process
Console Logging – Basic logs printed to terminal
No Dashboard – ⚠️ Add your own monitoring for stats
Real-time dashboard – Query volumes, token usage, response times
Customer Intelligence – User behavior patterns, popular queries, knowledge gaps
Conversation analytics – Full transcripts, resolution rates, common questions
Export capabilities – API export to BI tools and data warehouses
✅ Direct Support – Email, phone, chat with hands-on onboarding approach
⚠️ No Open Community – Closed solution, no plug-ins or user-built extensions
Internal Roadmap – Product updates from Pyx only, no community marketplace
Quick Setup Focus – Emphasizes minimal admin overhead for internal knowledge search
Community-Driven – GitHub issues and lightweight documentation
Research Foundation – Academic paper (arXiv 2308.03983) on RCG approach
No Paid Support – ⚠️ No SLA or enterprise help desk
Comprehensive docs – Tutorials, cookbooks, API references
Email + in-app support – Under 24hr response time
Premium support – Dedicated account managers for Premium/Enterprise
Open-source SDK – Python SDK, Postman, GitHub examples
5,000+ Zapier apps – CRMs, e-commerce, marketing integrations
Additional Considerations
✅ No-Fuss Internal Search – Employees use without coding, simple deployment for teams
⚠️ Not Public-Facing – Not ideal for customer chatbots or developer-heavy customization
Siloed Environment – Single AI search environment, not broad extensible platform
Simpler Scope – Less flexible than CustomGPT, but faster setup for internal use
N/A
Time-to-value – 2-minute deployment vs weeks with DIY
Always current – Auto-updates to latest GPT models
Proven scale – 6,000+ organizations, millions of queries
Multi-LLM – OpenAI + Claude reduces vendor lock-in
No-Code Interface & Usability
✅ Straightforward UI – Users log in, ask questions, get answers without coding
✅ No-Code Admin – Admins connect data sources, Pyx indexes automatically
Minimal Customization – UI stays consistent and uncluttered by design
Internal Q&A Hub – Perfect for employee use, not external embedding or branding
N/A
2-minute deployment – Fastest time-to-value in the industry
Wizard interface – Step-by-step with visual previews
Drag-and-drop – Upload files, paste URLs, connect cloud storage
In-browser testing – Test before deploying to production
Zero learning curve – Productive on day one
Market Position – Turnkey internal knowledge search (Germany), not embeddable chatbot platform
Target Customers – Small-mid European teams needing GDPR compliance and simple deployment
Key Competitors – Glean, Guru, Notion AI; not customer-facing chatbots like CustomGPT
✅ Advantages – Simple scope, auto-sync, GDPR compliance, ~$30/user/month predictable pricing
⚠️ Use Case Fit – Perfect for <50 user teams, not API integrations or public chatbots
Market Position – MIT open-source local RAG for on-premises deployment
Target Customers – Developers experimenting locally, strict data isolation orgs
Key Competitors – LangChain, LlamaIndex, PrivateGPT, LocalGPT
Advantages – ✅ Free MIT license, 100% local, full model control
Best For – Offline environments, GPU infrastructure teams, zero cloud costs
Market position – Leading RAG platform balancing enterprise accuracy with no-code usability. Trusted by 6,000+ orgs including Adobe, MIT, Dropbox.
Key differentiators – #1 benchmarked accuracy • 1,400+ formats • Full white-labeling included • Flat-rate pricing
vs OpenAI – 10% lower hallucination, 13% higher accuracy, 34% faster
vs Botsonic/Chatbase – More file formats, source citations, no hidden costs
vs LangChain – Production-ready in 2 min vs weeks of development
⚠️ Undisclosed LLM – Likely GPT-3.5/GPT-4 but model details not publicly documented
⚠️ No Model Selection – Cannot switch LLMs or choose speed vs accuracy configurations
⚠️ Opaque Architecture – Context window size and capabilities not exposed to users
Simplicity Focus – Hides technical complexity, users ask questions and get answers
⚠️ No Fine-Tuning – Cannot customize model on domain data for specialized responses
WizardVicuna-13B – Default uncensored instruction-tuned model
Any Hugging Face Model – Llama 2, Falcon, Mistral with GPU capacity
No Vendor Lock-In – ✅ Complete flexibility without API limits
Performance Trade-Off – ⚠️ Open models slower than managed cloud APIs
OpenAI – GPT-5.1 (Optimal/Smart), GPT-4 series
Anthropic – Claude 4.5 Opus/Sonnet (Enterprise)
Auto-routing – Intelligent model selection for cost/performance
Managed – No API keys or fine-tuning required
Conversational RAG – Context-aware search over enterprise documents with follow-up support
✅ Auto-Sync – Repositories sync automatically, changes reflected almost instantly
Document Formats – PDF, DOCX, PPT, TXT and common enterprise formats supported
⚠️ No Advanced Controls – Chunking, embedding models, similarity thresholds not exposed
⚠️ Limited Transparency – No citation metrics or anti-hallucination details published
Closed System – Optimized for internal Q&A, limited visibility into retrieval architecture
Retrieval-Centric Generation – Research-backed approach that separates the LLM's reasoning from knowledge memorization
Mixtures-of-Knowledge-Bases – Multiple knowledge bases with intelligent routing (see the routing sketch after this list)
Explicit Prompt-Weighting – Control retrieved content influence on answers
Retrieval Transparency – ✅ Visual debugging showing document selection
FAISS Search – Fast approximate nearest neighbor retrieval
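One simple way to picture the knowledge-base mixing idea is to route each query to whichever base returns the strongest top hit. The sketch below illustrates that idea with hypothetical per-domain indexes; it is not the paper's exact routing mechanism.

```python
# Toy knowledge-base routing: search every base's FAISS index and keep the one whose
# best passage scores highest. Index file names are hypothetical.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/multilingual-e5-base")
knowledge_bases = {
    "hr": faiss.read_index("hr.faiss"),
    "engineering": faiss.read_index("eng.faiss"),
}

def route(question: str) -> str:
    query = np.asarray(
        model.encode([f"query: {question}"], normalize_embeddings=True), dtype="float32"
    )
    scores = {name: idx.search(query, 1)[0][0][0] for name, idx in knowledge_bases.items()}
    return max(scores, key=scores.get)          # base with the most similar top passage

print(route("How do I request parental leave?"))  # expected: "hr"
```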
GPT-4 + RAG – Outperforms OpenAI in independent benchmarks
Anti-hallucination – Responses grounded in your content only
Automatic citations – Clickable source links in every response
Sub-second latency – Optimized vector search and caching
Scale to 300M words – No performance degradation at scale
✅ Internal Knowledge Search – Employees asking questions about company documents and policies
✅ Team Onboarding – New hires finding information without bothering colleagues
✅ Policy Lookup – HR, compliance, operational procedure retrieval for staff
✅ Small European Teams – GDPR-compliant internal search with EU data residency
⚠️ NOT SUITABLE FOR – Public chatbots, customer support, API integrations, multi-channel deployment
Air-Gapped Environments – ✅ Defense, classified research requiring offline operation
Healthcare PHI Compliance – HIPAA organizations needing 100% data isolation
RAG Research – Developers learning internals with full transparency
Zero-Cost RAG – Teams with GPU infrastructure avoiding subscriptions
Data Sovereignty – Strict data residency preventing cloud processing
Customer support – 24/7 AI handling common queries with citations
Internal knowledge – HR policies, onboarding, technical docs
Sales enablement – Product info, lead qualification, education
Documentation – Help centers, FAQs with auto-crawling
E-commerce – Product recommendations, order assistance
✅ GDPR Compliance – Germany-based with implicit EU data protection compliance
✅ German Data Residency – EU storage location for regional data sovereignty requirements
✅ Enterprise Privacy – Customer data isolated, encrypted in transit and at rest
✅ Role-Based Access – Built-in controls, admins set document visibility per user
⚠️ Limited Certifications – SOC 2, ISO 27001, HIPAA not publicly documented
Complete Data Isolation – ✅ Ideal for classified, PHI, PII data
No Third-Party APIs – Zero external calls to cloud providers
Open-Source Auditing – Full code transparency for security reviews
Self-Managed Security – ⚠️ You control all security layers
SOC 2 Type II + GDPR – Regular third-party audits, full EU compliance
256-bit AES encryption – Data at rest; SSL/TLS in transit
SSO + 2FA + RBAC – Enterprise access controls with role-based permissions
Data isolation – Never trains on customer data
Domain allowlisting – Restrict chatbot to approved domains
Seat-Based Pricing – ~$30 per user per month
✅ Small Team Value – Affordable for teams under 50 users, predictable costs
⚠️ Scalability Cost – 100 users = $3,000/month, expensive for large organizations
Unlimited Content – No published document limits, gated only by user seats
Free Trial + Enterprise – Evaluation available, custom pricing for volume discounts
MIT License – ✅ Free with no subscription or API charges
GPU Costs Only – Hardware or cloud compute are sole expenses
Unlimited Queries – No per-request pricing or rate limits
Standard: $99/mo – 10 chatbots, 60M words, 5K items/bot
Premium: $449/mo – 100 chatbots, 300M words, 20K items/bot
Enterprise: Custom – SSO, dedicated support, custom SLAs
7-day free trial – Full Standard access, no charges
Flat-rate pricing – No per-query charges, no hidden costs
✅ Direct Support – Email, phone, chat with hands-on onboarding approach
✅ Quick Deployment – Minimal admin overhead, connect sources and start asking questions
⚠️ No Open Community – Closed solution, no plug-ins or user extensions
⚠️ No Developer Docs – No API documentation or programmatic access guides
Internal Roadmap – Updates from Pyx only, no user-contributed features
GitHub Repository – Code, docs, and examples at RCGAI/SimplyRetrieve
Academic Paper – arXiv 2308.03983 explaining RCG architecture
Community Support – GitHub Issues for troubleshooting
No Paid Support – ⚠️ Community-driven only, no SLAs
Documentation hub – Docs, tutorials, API references
Support channels – Email, in-app chat, dedicated managers (Premium+)
Open-source – Python SDK, Postman, GitHub examples
Community – User community + 5,000 Zapier integrations
Limitations & Considerations
⚠️ No Public API – Cannot embed or call programmatically, standalone UI only
⚠️ No Messaging Integrations – No Slack, Teams, WhatsApp or chat platform connectors
⚠️ Limited Branding – Minimal customization, not white-label solution for public deployment
⚠️ No Advanced Controls – Cannot configure RAG parameters, model selection, retrieval strategies
⚠️ Seat-Based Scaling – Expensive for large orgs vs usage-based pricing models
✅ Best For – Small European teams (<50 users) prioritizing simplicity and GDPR over flexibility
Developer-Only Tool – ⚠️ Requires Python, GPU, and technical expertise
GPU Infrastructure Required – ⚠️ Dedicated hardware or cloud GPU needed
Basic UI – Gradio interface needs custom front-end for production
Manual Scaling – ⚠️ No auto-scaling, you manage load balancing
No Enterprise Features – Missing multi-tenancy, user management, analytics
Slower Inference – ⚠️ 3-10+ seconds vs sub-second cloud APIs
Managed service – Less control over RAG pipeline vs build-your-own
Model selection – OpenAI + Anthropic only; no Cohere, AI21, open-source
Real-time data – Requires re-indexing; not ideal for live inventory/prices
Enterprise features – Custom SSO only on Enterprise plan
⚠️ NO Agent Capabilities – No autonomous agents, tool calling, or multi-agent orchestration
Conversational Search Only – Context-aware dialogue for Q&A, not agentic or autonomous behavior
Basic RAG Architecture – Standard retrieval without function calling, tool use, or workflows
⚠️ No External Actions – Cannot invoke APIs, execute code, query databases, or interact externally
Internal Knowledge Focus – Employee Q&A about documents, not task automation or workflows
Retrieval-Centric Generation – Research approach separating reasoning from knowledge
Retrieval Tuning Module – ✅ Developer transparency showing document selection
Knowledge Base Mixing – Route queries across multiple sources
Single-Turn Focus – ⚠️ Limited multi-turn conversation memory
No Chatbot UI – ⚠️ Gradio for developers only
No Production Features – ⚠️ No lead capture, handoff, or multi-channel support
Custom AI Agents – Autonomous GPT-4/Claude agents for business tasks
Multi-Agent Systems – Specialized agents for support, sales, knowledge
Memory & Context – Persistent conversation history across sessions
Tool Integration – Webhooks plus 5,000+ Zapier apps for automation (see the webhook sketch below)
Continuous Learning – Auto re-indexing without manual retraining
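As an example of wiring the webhook side into your own automation, here is a minimal Flask receiver. The event types and payload fields are hypothetical, so check the platform's webhook documentation for the real schema and for signature verification.

```python
# Sketch of a webhook receiver for conversation/lead events. Payload fields shown here
# are assumptions for illustration; consult the webhook docs for the actual schema.
from flask import Flask, request

app = Flask(__name__)

@app.post("/webhooks/customgpt")
def handle_event():
    event = request.get_json(force=True)
    kind = event.get("type")                          # e.g. "lead.captured" (hypothetical)
    if kind == "lead.captured":
        email = event.get("data", {}).get("email")
        print(f"New lead: {email}")                   # forward to a CRM, Slack, etc.
    return {"ok": True}, 200

if __name__ == "__main__":
    app.run(port=8000)
```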
RAG-as-a-Service Assessment
⚠️ NOT TRUE RAG-AS-A-SERVICE – Standalone internal app, not API-accessible RAG platform
Turnkey Application – Self-contained Q&A tool vs developer-accessible RAG infrastructure
⚠️ No API Access – No REST API, SDKs, programmatic access unlike CustomGPT/Vectara
Closed Application – Web/desktop interface only, cannot build custom applications on top
SaaS vs RaaS – Software-as-a-Service (a standalone app), NOT RAG-as-a-Service (API infrastructure)
Best Comparison Category – Internal search tools (Glean, Guru), not developer RAG platforms
NOT RAG-AS-A-SERVICE – Open-source research project for local experimentation
Academic Foundation – Published research tool from RCGAI (arXiv 2308.03983)
Self-Hosted Only – ⚠️ No managed infrastructure, APIs, or SLAs
Developer-First Design – Python with GPU infrastructure requirements
100% Local Execution – ✅ Perfect for air-gapped and classified environments
No Service Features – ⚠️ No auth, multi-tenancy, analytics, or SaaS conveniences
Platform type – TRUE RAG-AS-A-SERVICE with managed infrastructure
API-first – REST API, Python SDK, OpenAI compatibility, MCP Server
No-code option – 2-minute wizard deployment for non-developers
Hybrid positioning – Serves both dev teams (APIs) and business users (no-code)
Enterprise ready – SOC 2 Type II, GDPR, WCAG 2.0, flat-rate pricing