# MiniMax AI - Claude-Style Coding Assistant

A powerful Claude-style AI coding assistant powered by MiniMax AI models. Get intelligent code analysis, generation, and interactive programming assistance right from your terminal.
## 🚀 Quick Start

```bash
# Install globally
npm install -g @minimax-ai/minimax

# Set your Hugging Face token
export HF_TOKEN=your_token_here

# Explore the CLI
minimax --help
minimax analyze review myfile.py
minimax generate function "calculate fibonacci"
minimax chat start
```
## ✨ Features

- 🔍 Code Analysis: Review, explain, debug, and optimize your code
- 🛠️ Code Generation: Generate functions, classes, tests, and entire projects
- ✏️ Smart Editing: Modify and refactor code with AI assistance
- 💬 Interactive Mode: Chat-based coding assistance with context awareness
- 📁 Project Intelligence: Understand and work with entire codebases
- 🌍 Cross-Platform: Works on Windows, macOS, and Linux
- ⚡ Easy Installation: One command global installation via npm
## 📋 Prerequisites

- Node.js 14+ (for npm installation)
- Python 3.8+ (automatically managed)
- Hugging Face account with API access
## 📦 Installation

### Option 1: Global Installation (Recommended)

```bash
npm install -g @minimax-ai/minimax
```

### Option 2: Local Installation

```bash
npm install @minimax-ai/minimax
```

The installation automatically:

- ✅ Detects and validates Python 3.8+
- ✅ Creates an isolated Python environment
- ✅ Installs required Python dependencies
- ✅ Sets up the global `minimax` command

### Option 3: Production Installation

Install from PyPI (when published):

```bash
pip install minimax-client
```
## 🔑 Setup

### 1. Get Your Hugging Face Token

- Visit https://huggingface.co/settings/tokens
- Create a new token with read access
- Set it as an environment variable:

```bash
# Linux/macOS
export HF_TOKEN='your_token_here'
```

```cmd
:: Windows (Command Prompt)
set HF_TOKEN=your_token_here
```

```powershell
# Windows (PowerShell)
$env:HF_TOKEN='your_token_here'
```

### 2. Verify Installation

```bash
minimax --version
minimax --help
```
## Usage

### Command Line Interface

After installation, use the `minimax-client` command:

```bash
# Run with default settings
minimax-client

# Send a single message
minimax-client --message "Explain quantum computing in simple terms"

# Choose a specific model
minimax-client --model "MiniMaxAI/MiniMax-M1-40k" --message "Hello, world!"

# Disable streaming
minimax-client --no-streaming --message "What is AI?"

# Select an inference provider
minimax-client --provider "hf-inference-endpoints" --message "Tell me a joke"

# Show the version
minimax-client --version
```

### Legacy Script

The original script is still available for backwards compatibility:

```bash
python minimax_client_legacy.py
```
## Configuration

### CLI Arguments

| Argument | Short | Default | Description |
|---|---|---|---|
| `--model` | `-m` | `MiniMaxAI/MiniMax-M1-80k` | Model name to use |
| `--message` | `-msg` | `What is the capital of France?` | User message to send |
| `--provider` | `-p` | `auto` | Inference provider to use |
| `--no-streaming` | | `False` | Disable streaming (use regular completion) |
| `--verbose` | `-v` | `False` | Enable verbose logging |
| `--help` | `-h` | | Show help message |
### Environment Variables

| Variable | Required | Description |
|---|---|---|
| `HF_TOKEN` | Yes | Hugging Face API token |
| `MINIMAX_MODEL` | No | Default model name (overridden by CLI) |
| `MINIMAX_PROVIDER` | No | Default provider (overridden by CLI) |
### Configuration Precedence

1. CLI arguments (highest priority)
2. Environment variables
3. Default values (lowest priority)
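The precedence rules above can be sketched as a small resolver. This is a hypothetical re-implementation for illustration; the real `config.py` may differ in detail:

```python
import os

# Built-in defaults and the environment variables that can override them
# (values taken from the tables above).
DEFAULTS = {"model": "MiniMaxAI/MiniMax-M1-80k", "provider": "auto"}
ENV_VARS = {"model": "MINIMAX_MODEL", "provider": "MINIMAX_PROVIDER"}

def resolve(key, cli_value=None):
    """Return the effective setting for `key` using the documented
    precedence: CLI argument > environment variable > default."""
    if cli_value is not None:               # 1. CLI argument (highest)
        return cli_value
    env_value = os.environ.get(ENV_VARS[key])
    if env_value:                           # 2. environment variable
        return env_value
    return DEFAULTS[key]                    # 3. default value (lowest)
```

For example, with `MINIMAX_MODEL` set, `resolve("model")` returns the environment value, but `resolve("model", cli_value=...)` always returns the CLI value.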
### Examples

```bash
# Use an environment variable as the default model
export MINIMAX_MODEL="MiniMaxAI/MiniMax-M1-40k"
minimax-client --message "Hello!"

# A CLI argument overrides the environment variable
export MINIMAX_MODEL="MiniMaxAI/MiniMax-M1-40k"
minimax-client --model "MiniMaxAI/MiniMax-M1-80k" --message "Hello!"

# Combine several options
minimax-client \
  --model "MiniMaxAI/MiniMax-M1-80k" \
  --message "Explain the theory of relativity" \
  --provider "hf-inference-endpoints" \
  --verbose
```
### Expected Output

The CLI tool will:

- ✓ Validate environment and configuration
- ✓ Initialize the InferenceClient with the specified provider
- 🚀 Send the request to the configured model
- 🔄 Display the streaming response in real time (if enabled)
- ✅ Show completion status with proper logging

Example output:

```text
INFO - Starting MiniMax client with model: MiniMaxAI/MiniMax-M1-80k
INFO - Environment validation successful
INFO - InferenceClient initialized successfully
INFO - Sending message: What is the capital of France?
INFO - Streaming response:
--------------------------------------------------
The capital of France is Paris. It is the largest city in France and serves as the country's political, economic, and cultural center...
--------------------------------------------------
INFO - Chat completion successful
INFO - Client operation completed successfully
```
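Conceptually, the real-time display works by echoing each text chunk as it arrives and accumulating the pieces into the full response. The following is a simplified, self-contained sketch of that loop (not the actual `chat.py` code, which reads chunks from the inference stream):

```python
def display_stream(chunks):
    """Echo text chunks as they arrive, framed by the separator lines
    shown in the example output, and return the assembled response."""
    parts = []
    print("-" * 50)
    for text in chunks:
        # end="" and flush=True give the incremental, real-time effect
        print(text, end="", flush=True)
        parts.append(text)
    print("\n" + "-" * 50)
    return "".join(parts)
```

Any iterable of strings works here; in the real client the chunks come from the streaming chat completion.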
## Package Structure

```text
src/minimax_client/
├── __init__.py
├── main.py
├── config.py
├── environment.py
├── client.py
└── chat.py
```
### Key Components

- **Configuration Management** (`config.py`): Handles CLI arguments, environment variables, and default values
- **Environment Validation** (`environment.py`): Validates `HF_TOKEN` and optional `.env` file loading
- **Client Management** (`client.py`): InferenceClient initialization with specific error handling
- **Chat Processing** (`chat.py`): Streaming and non-streaming chat completions with enhanced error recovery
- **Main Orchestrator** (`main.py`): Coordinates all components and provides the CLI interface
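As a minimal sketch of what the environment-validation step might look like (hypothetical; the actual `environment.py` also handles `.env` loading), note how a missing token maps to the configuration-error exit code documented below:

```python
import os
import sys

def validate_environment():
    """Return the Hugging Face token from the environment, or exit
    with code 2 (configuration error) if it is missing."""
    token = os.environ.get("HF_TOKEN", "").strip()
    if not token:
        print("Error: HF_TOKEN is not set. See the Setup section.",
              file=sys.stderr)
        sys.exit(2)  # exit code 2 = configuration error
    return token
```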
## Error Handling

### Exit Codes

| Code | Category | Description |
|---|---|---|
| 0 | Success | Operation completed successfully |
| 1 | General Error | Unspecified error occurred |
| 2 | Configuration Error | Invalid configuration or missing environment variables |
| 3 | Network Error | Network connectivity or timeout issues |
| 4 | Authentication Error | Invalid or missing API token |
| 5 | Model Error | Model-related issues (not found, gated, unavailable) |
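One plausible way to map HTTP statuses to these exit codes is sketched below. This is a hypothetical helper for illustration only; the real client also inspects the exception type (see the next table), so its mapping may differ:

```python
def exit_code_for_status(status):
    """Map an HTTP status from the Hub API to the exit codes above."""
    if status in (401, 403):
        return 4   # authentication error: bad token or no access
    if status in (404, 410):
        return 5   # model error: repository not found or gone
    if status == 429 or 500 <= status < 600:
        return 3   # network error: rate limits, transient server failures
    return 1       # general error: anything else
```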
### Specific Exception Handling

| Exception Type | Exit Code | Common Causes | Solutions |
|---|---|---|---|
| `HfHubHTTPError` | 3, 4, 5 | HTTP errors (401, 404, 429, 500) | Check token, model name, rate limits |
| `RepositoryNotFoundError` | 5 | Model doesn't exist | Verify model name spelling |
| `GatedRepoError` | 5 | Model requires access approval | Request access on Hugging Face |
| `InferenceTimeoutError` | 5 | Model temporarily unavailable | Retry later or use a different model |
| `BadRequestError` | 5 | Invalid parameters | Check message format and model requirements |
| `ValueError` | 4 | Authentication issues | Verify `HF_TOKEN` validity |
| `requests.exceptions.*` | 3 | Network problems | Check internet connection |
## Development

### Setting Up the Development Environment

```bash
git clone <repository-url>
cd MiniMax
python -m venv venv
source venv/bin/activate
pip install -e ".[dev]"
pre-commit install
```

### Running Tests

```bash
# Run the full suite
pytest

# With coverage
pytest --cov=src/minimax_client

# A single test module
pytest tests/test_config.py

# Verbose output
pytest -v
```
### Project Structure

```text
MiniMax/
├── src/minimax_client/
├── tests/
├── pyproject.toml
├── requirements.txt
├── README.md
├── minimax_client_legacy.py
└── .env.example
```
## Contributing

1. Fork the repository
2. Create a feature branch: `git checkout -b feature-name`
3. Make your changes with proper tests
4. Run the test suite: `pytest`
5. Submit a pull request
### Code Style

- Follow PEP 8 guidelines
- Use type hints for all functions
- Add docstrings for public APIs
- Write unit tests for new functionality
- Use proper logging instead of print statements
## API Documentation

### Using as a Python Package

```python
from minimax_client.config import Configuration
from minimax_client.client import initialize_client
from minimax_client.chat import create_chat_completion

# Create configuration
config = Configuration(
    model_name="MiniMaxAI/MiniMax-M1-80k",
    user_message="Hello, world!",
    provider="auto",
    streaming=True,
)

# Initialize client
client = initialize_client(config)

# Create chat completion
create_chat_completion(client, config)
```
### Configuration Class

```python
class Configuration:
    """Configuration management for MiniMax client."""

    def __init__(self, model_name: str = None, user_message: str = None,
                 provider: str = None, streaming: bool = None):
        """Initialize configuration with optional overrides."""

    @classmethod
    def from_args(cls, args: list = None) -> 'Configuration':
        """Create configuration from command line arguments."""

    def validate(self) -> None:
        """Validate configuration values."""
```
## Troubleshooting

### Environment Issues

| Problem | Solution |
|---|---|
| `HF_TOKEN` not found | Set the environment variable or create a `.env` file |
| Permission denied | Check file permissions and the virtual environment |
| Module not found | Install the package with `pip install -e .` |

### Authentication Issues

| Problem | Solution |
|---|---|
| 401 Unauthorized | Verify `HF_TOKEN` is valid and not expired |
| 403 Forbidden | Request access to gated models |
| Invalid token format | Check that the token has no extra spaces or characters |
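The "invalid token format" checks can be automated with a small heuristic helper. This is a hypothetical illustration, not part of the package; current Hugging Face user access tokens start with `hf_`, but treat that check as a heuristic rather than validation:

```python
def token_format_issues(token):
    """Return a list of formatting problems commonly seen with
    copy-pasted tokens (heuristics only, not real validation)."""
    issues = []
    if token != token.strip():
        issues.append("leading/trailing whitespace")
    if any(c.isspace() for c in token.strip()):
        issues.append("embedded whitespace")
    if not token.strip().startswith("hf_"):
        issues.append("does not start with 'hf_' (current HF token format)")
    return issues
```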
### Model Issues

| Problem | Solution |
|---|---|
| Model not found (404) | Verify model name spelling and availability |
| Model temporarily unavailable (503) | Try again later or use a different model |
| Rate limit exceeded (429) | Wait before retrying or upgrade your API plan |
| Model loading timeout | Use a smaller model or retry later |

### Network Issues

| Problem | Solution |
|---|---|
| Connection timeout | Check internet connection and firewall |
| DNS resolution failed | Verify DNS settings |
| SSL certificate error | Update certificates or check the system time |

### Configuration Issues

| Problem | Solution |
|---|---|
| Invalid provider | Use `auto`, `hf-inference-endpoints`, or another valid provider |
| Message too long | Reduce message length or use a model with a larger context |
| Invalid streaming parameter | Use boolean values (`True`/`False`) |
## Getting Help

- **GitHub Issues**: Report bugs and request features
- **Documentation**: Check this README and inline code documentation
- **Hugging Face Forums**: Community support for model-specific issues
- **API Documentation**: Hugging Face Inference API docs
## License

This client library is provided under the MIT License. See the LICENSE file for details.