What specific LLMs can Refact.ai integrate with for code generation and chat?
Refact.ai integrates with a range of large language models, including Claude 4, GPT-4.1, GPT-4o, and Gemini 2.5 Pro, so users can choose the model that best fits their coding tasks and preferences.
How does Refact.ai's autocompletion feature achieve its accuracy and context awareness?
Autocompletion is powered by the Qwen2.5-Coder model and uses retrieval-augmented generation (RAG). As you type, it analyzes each symbol, retrieves relevant context from the project's codebase, and generates highly accurate suggestions for the next lines, functions, or classes.
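To make the retrieval step concrete, here is a deliberately simplified sketch of the general RAG idea described above. It is not Refact.ai's implementation: the `embed`, `retrieve`, and `build_completion_prompt` helpers and the snippet index are hypothetical stand-ins, and a real system would use a neural encoder and a vector database rather than token counting.

```python
# Illustrative sketch only: a toy retrieval-augmented completion flow.
# NOT Refact.ai's implementation; all names here are hypothetical.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-tokens 'embedding'; real systems use a neural encoder."""
    return Counter(text.split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical project snippets that a codebase-aware index might hold.
snippet_index = [
    "def load_config(path): ...",
    "class UserRepository: def get_user(self, user_id): ...",
    "def send_email(recipient, subject, body): ...",
]

def retrieve(context: str, k: int = 2) -> list[str]:
    """Rank indexed snippets by similarity to the code around the cursor."""
    ctx_vec = embed(context)
    ranked = sorted(snippet_index, key=lambda s: cosine(ctx_vec, embed(s)), reverse=True)
    return ranked[:k]

def build_completion_prompt(context: str) -> str:
    """Prepend retrieved project snippets so the model sees relevant context."""
    retrieved = "\n".join(retrieve(context))
    return f"# Relevant project code:\n{retrieved}\n\n# Complete:\n{context}"

print(build_completion_prompt("user = UserRepository().get_user("))
```

The retrieved snippets give the model cues about project-specific names and conventions, which is what makes the resulting suggestions context-aware rather than generic.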
What are the key differences in features between the Free and Pro plans for Refact.ai?
The Free plan includes 2,000 coins for AI Agent and Chat usage, in-IDE chat, unlimited fast autocompletion, and a codebase-aware vector database. The Pro plan includes everything in Free and adds 10,000 coins renewed monthly, plus 'Thinking abilities' for the AI Agent.
Can Refact.ai be deployed within a company's private infrastructure, and what benefits does this offer?
Yes, Refact.ai offers self-hosting options, including on-premise installation and private server deployment via AWS Marketplace. This provides complete code privacy with zero telemetry leaving the environment and allows for LLM fine-tuning on the organization's codebase.
Beyond code generation, what other development tasks can the Refact.ai Agent autonomously handle?
The Refact.ai Agent can plan, execute, and deploy coding tasks end-to-end. It can search and analyze repositories, connect with GitHub, databases, and CI/CD pipelines, and even debug and fix issues, as demonstrated by its ability to identify and resolve a WordPress plugin issue.
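For a sense of what "end-to-end" means here, the sketch below shows a simplified plan-execute-verify loop of the kind such an agent runs: break a goal into steps, execute each step with tool calls, and confirm completion. The step descriptions and helper functions are hypothetical stand-ins, not Refact.ai's actual agent interface.

```python
# Illustrative sketch only: a simplified agent loop, with hypothetical helpers.
from dataclasses import dataclass, field

@dataclass
class Step:
    description: str
    done: bool = False

@dataclass
class AgentRun:
    goal: str
    steps: list[Step] = field(default_factory=list)

def plan(goal: str) -> list[Step]:
    """In a real agent, an LLM would break the goal into concrete steps."""
    return [
        Step(f"Search the repository for code related to: {goal}"),
        Step("Draft a patch for the identified issue"),
        Step("Run the test suite to verify the fix"),
        Step("Open a pull request with the change"),
    ]

def execute(step: Step) -> None:
    """Placeholder for tool calls (repo search, file edits, CI runs)."""
    print(f"executing: {step.description}")
    step.done = True

def run_agent(goal: str) -> AgentRun:
    """Plan the task, then execute each step until the goal is complete."""
    run = AgentRun(goal, plan(goal))
    for step in run.steps:
        execute(step)
    return run

run = run_agent("Fix the reported WordPress plugin issue")
print(all(s.done for s in run.steps))  # True once every step has executed
```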