# Helix v1
Helix v1 is Rustellar's flagship conversational AI model, designed for general-purpose dialogue and natural language understanding.
## Overview
Helix v1 is built on a proven decoder-only Transformer architecture, which makes it well suited to understanding context and generating coherent, natural responses across a wide range of conversational scenarios.
## Key Features

### General Conversation Excellence
Helix v1 excels at:
- Natural dialogue and chat applications
- Question answering systems
- Customer support automation
- Personal assistants
- Educational tutoring
## Technical Specifications
| Specification | Value |
|---|---|
| Architecture | Transformer Decoder-only |
| Context Window | 8,192 tokens |
| Maximum Output | 4,096 tokens |
| Languages | 95+ languages |
| Training Cutoff | January 2025 |
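
The 8,192-token context window covers both the prompt and the requested output, so it is worth checking request size before sending. A minimal pre-flight sketch, using the common 4-characters-per-token approximation (for exact counts, use a real tokenizer):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_context(messages: list[dict], max_output: int = 4096,
                 context_window: int = 8192) -> bool:
    """Check that the prompt plus the requested output fits the window."""
    prompt_tokens = sum(estimate_tokens(m["content"]) for m in messages)
    return prompt_tokens + max_output <= context_window

messages = [{"role": "user", "content": "Hello, Helix!"}]
print(fits_context(messages))  # True: a short prompt easily fits
```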
## Use Cases

### Customer Support Chatbot
```python
import requests

# Configuration for customer support
response = requests.post(
    "https://api.rustellar.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json"
    },
    json={
        "model": "helix-v1",
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful customer support agent. "
                           "Be polite, professional, and provide clear solutions."
            },
            {
                "role": "user",
                "content": "I can't log in to my account. What should I do?"
            }
        ],
        "temperature": 0.7,  # balanced responses
        "max_tokens": 500
    }
)
print(response.json()['choices'][0]['message']['content'])
```
### Educational Q&A
```python
import requests

# Example educational Q&A system
response = requests.post(
    "https://api.rustellar.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json"
    },
    json={
        "model": "helix-v1",
        "messages": [
            {
                "role": "system",
                "content": "You are an educational tutor. "
                           "Explain concepts clearly with examples."
            },
            {
                "role": "user",
                "content": "Explain photosynthesis in simple terms."
            }
        ],
        "temperature": 0.5,  # lower for more consistent explanations
        "max_tokens": 800
    }
)
print(response.json()['choices'][0]['message']['content'])
```
### Code Assistant
```python
import requests

# Example programming assistant
response = requests.post(
    "https://api.rustellar.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json"
    },
    json={
        "model": "helix-v1",
        "messages": [
            {
                "role": "system",
                "content": "You are an expert programming assistant. "
                           "Provide clear code examples with explanations."
            },
            {
                "role": "user",
                "content": "How do I read a CSV file in Python?"
            }
        ],
        "temperature": 0.3,  # lower for accurate code generation
        "max_tokens": 1000
    }
)
print(response.json()['choices'][0]['message']['content'])
```
## Performance Characteristics

### Response Speed
Helix v1 is optimized for fast response times, typically generating completions in:
- Short responses (< 100 tokens): 0.5-1 second
- Medium responses (100-500 tokens): 1-3 seconds
- Long responses (500+ tokens): 3-8 seconds
### Accuracy
Helix v1 demonstrates high accuracy in:
- ✅ Factual question answering
- ✅ Task completion and instruction following
- ✅ Multi-turn conversation coherence
- ✅ Context understanding and retention
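
The API is stateless, so multi-turn coherence depends on the client resending prior turns with each request. A minimal sketch of the history-keeping logic (the network call itself is omitted here):

```python
def append_turn(history: list[dict], user_text: str, assistant_text: str) -> list[dict]:
    """Record one completed user/assistant exchange in the running history."""
    return history + [
        {"role": "user", "content": user_text},
        {"role": "assistant", "content": assistant_text},
    ]

history = [{"role": "system", "content": "You are a helpful assistant."}]
history = append_turn(history, "What is a transformer?",
                      "A neural network architecture based on attention...")
# On the next request, send the full `history` plus the new user message.
print(len(history))  # 3 messages so far
```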
## Best Practices

### Temperature Settings
Choose temperature based on your use case:
```python
# Factual responses (low temperature)
# Recommended: 0.1 - 0.5
{
    "temperature": 0.3,
    "use_cases": ["Q&A", "Code generation", "Technical support"]
}

# Balanced responses (medium temperature)
# Recommended: 0.6 - 0.8
{
    "temperature": 0.7,
    "use_cases": ["Customer service", "General chat", "Tutoring"]
}

# Creative responses (high temperature)
# Recommended: 0.9 - 1.2
{
    "temperature": 1.0,
    "use_cases": ["Creative writing", "Brainstorming", "Ideation"]
}
```
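
These recommendations can be collapsed into a small lookup helper. The mapping below mirrors the ranges above; the use-case keys are illustrative names, not official API values:

```python
# Recommended temperatures by use case (illustrative keys)
TEMPERATURE_BY_USE_CASE = {
    # factual (0.1 - 0.5)
    "qa": 0.3, "code": 0.3, "technical_support": 0.3,
    # balanced (0.6 - 0.8)
    "customer_service": 0.7, "chat": 0.7, "tutoring": 0.7,
    # creative (0.9 - 1.2)
    "creative_writing": 1.0, "brainstorming": 1.0, "ideation": 1.0,
}

def pick_temperature(use_case: str, default: float = 0.7) -> float:
    """Return the recommended temperature, falling back to a balanced default."""
    return TEMPERATURE_BY_USE_CASE.get(use_case, default)

print(pick_temperature("code"))     # 0.3
print(pick_temperature("unknown"))  # 0.7
```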
### System Message Guidelines
Craft effective system messages:
```python
# ❌ Bad: vague and non-specific
"You are helpful."

# ✅ Good: clear and specific
"You are a professional customer service agent for TechCorp. "
"Always be polite, empathetic, and solution-focused. "
"If you don't know the answer, offer to escalate to a human agent."
```
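
A system message like the good example above slots into the request body the same way in every use case, so it can help to centralize the assembly. A hypothetical helper (not part of any official SDK):

```python
def build_chat_payload(system_message: str, user_message: str,
                       model: str = "helix-v1", temperature: float = 0.7,
                       max_tokens: int = 500) -> dict:
    """Assemble a chat completion request body (hypothetical helper)."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_message},
            {"role": "user", "content": user_message},
        ],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

payload = build_chat_payload(
    "You are a professional customer service agent for TechCorp. "
    "Always be polite, empathetic, and solution-focused.",
    "I can't log in to my account.",
)
```

The resulting `payload` can be passed directly as the `json=` argument to `requests.post`.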
### Context Management
For long conversations, manage context effectively:
```python
import requests

def manage_conversation_context(messages, max_tokens=6000):
    """
    Manage the conversation context by dropping older messages
    so the history stays under the token limit.
    """
    # Rough token estimate (use a real tokenizer in production)
    estimated_tokens = sum(len(m['content']) // 4 for m in messages)
    if estimated_tokens > max_tokens:
        # Always keep system messages
        system_msgs = [m for m in messages if m['role'] == 'system']
        # Keep only the most recent user/assistant messages
        recent_msgs = messages[-20:]  # last 20 messages
        return system_msgs + [m for m in recent_msgs if m['role'] != 'system']
    return messages

# Example usage
conversation_history = [
    {"role": "system", "content": "You are a helpful assistant."},
    # ... many messages ...
]

# Trim the context
trimmed_messages = manage_conversation_context(conversation_history)

# API request
response = requests.post(
    "https://api.rustellar.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json"
    },
    json={
        "model": "helix-v1",
        "messages": trimmed_messages
    }
)
```
## Pricing
Helix v1 uses token-based pricing:
| Token Type | Price per 1M tokens |
|---|---|
| Input | $0.50 |
| Output | $1.50 |
See our Pricing page for more details.
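
Per-request cost follows directly from the table. A small estimator, assuming token counts are known (e.g. from the API's usage field or a tokenizer):

```python
INPUT_PRICE_PER_M = 0.50   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.50  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD from token counts."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token reply:
print(f"${estimate_cost(2000, 500):.6f}")  # $0.001750
```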
## Comparison with Iroha
| Feature | Helix v1 | Iroha |
|---|---|---|
| Best for | General conversation | Story generation |
| Architecture | Transformer | Mamba |
| Context Window | 8,192 tokens | 16,384 tokens |
| Response Speed | Faster | Moderate |
| Creativity | Moderate | High |
## Limitations
While Helix v1 is highly capable, be aware of these limitations:
- Knowledge Cutoff: Training data up to January 2025
- Context Window: Limited to 8,192 tokens
- Factual Accuracy: May occasionally produce incorrect information
- Calculation: Not optimized for complex mathematical calculations
- Real-time Data: No access to current events or live data
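
Because the model is not optimized for complex calculations, a common mitigation is to evaluate arithmetic client-side and include only the result in the prompt. A minimal sketch of a safe evaluator using Python's `ast` module (an illustrative pattern, not part of the Helix API):

```python
import ast
import operator

# Whitelisted binary operators for safe arithmetic evaluation
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow,
}

def safe_eval(expr: str) -> float:
    """Evaluate a basic arithmetic expression without using eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -walk(node.operand)
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("12 * (34 + 56)"))  # 1080
```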
## Support
For technical support or questions about Helix v1:
- Email: support@rustellar.com
- Documentation: platform.rustellar.com/docs