Input Formats
Learn about the supported input formats for Rustellar AI models.
Request Structure
All API requests follow an OpenAI-compatible format, so existing OpenAI-style client code can be adapted with minimal changes.
Basic Request Format
{
  "model": "helix-v1",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Hello!"
    }
  ],
  "temperature": 0.7,
  "max_tokens": 1000
}
Message Roles
System
System messages set the behavior and context for the AI model.
{
  "role": "system",
  "content": "You are a helpful assistant specialized in programming."
}
User
User messages represent the input from the end user.
{
  "role": "user",
  "content": "What is Python?"
}
Assistant
Assistant messages represent previous responses from the AI model and are used to carry conversation history in multi-turn dialogues.
{
  "role": "assistant",
  "content": "Python is a high-level programming language..."
}
Parameters
Required Parameters
| Parameter | Type | Description |
|---|---|---|
| model | string | The model to use (e.g., "helix-v1", "iroha") |
| messages | array | Array of message objects with role and content |
Optional Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| temperature | float | 0.7 | Controls randomness (0.0 to 2.0) |
| max_tokens | integer | 2048 | Maximum tokens in the response |
| top_p | float | 1.0 | Nucleus sampling parameter |
| frequency_penalty | float | 0.0 | Reduces repetition (-2.0 to 2.0) |
| presence_penalty | float | 0.0 | Encourages new topics (-2.0 to 2.0) |
| stop | string or array | null | Sequences where generation stops |
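For illustration, a request payload that combines several of these optional parameters might look like the following; the specific values are examples only, not tuned recommendations:

# Illustrative payload combining several optional parameters
# (values are examples, not recommendations)
payload = {
    "model": "helix-v1",
    "messages": [
        {"role": "user", "content": "List three facts about Mars."}
    ],
    "temperature": 0.3,        # low: favor consistent, factual output
    "max_tokens": 500,         # cap response length (and cost)
    "top_p": 0.9,              # nucleus sampling
    "frequency_penalty": 0.5,  # discourage repeated phrases
    "presence_penalty": 0.2,   # nudge toward new topics
    "stop": ["\n\n"]           # stop at the first blank line
}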
Example Requests
Simple Conversation
import requests

# Simple conversation request
response = requests.post(
    "https://api.rustellar.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json"
    },
    json={
        "model": "helix-v1",
        "messages": [
            {"role": "user", "content": "Tell me a joke"}
        ]
    }
)
print(response.json())
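Assuming the response follows the standard OpenAI-compatible shape (see Output Formats for the exact schema), the assistant's reply from the example above can be extracted like this:

# Extract the assistant's reply from the response above.
# Assumes the OpenAI-compatible shape: choices[0].message.content
data = response.json()
reply = data["choices"][0]["message"]["content"]
print(reply)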
Multi-turn Conversation
# Request including multi-turn conversation history
response = requests.post(
    "https://api.rustellar.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json"
    },
    json={
        "model": "helix-v1",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is 2+2?"},
            {"role": "assistant", "content": "2+2 equals 4."},
            {"role": "user", "content": "What about 3+3?"}
        ],
        "temperature": 0.7,
        "max_tokens": 100
    }
)
print(response.json())
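Rather than writing the history by hand, most clients append each assistant reply to the messages array before the next user turn. A minimal sketch, assuming the OpenAI-compatible response shape; the ask helper is illustrative, not part of the API:

import requests

messages = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(question):
    # Append the user turn, call the API, then store the assistant turn
    # so the next call carries the full conversation history.
    messages.append({"role": "user", "content": question})
    resp = requests.post(
        "https://api.rustellar.com/v1/chat/completions",
        headers={
            "Authorization": "Bearer YOUR_API_KEY",
            "Content-Type": "application/json"
        },
        json={"model": "helix-v1", "messages": messages}
    )
    answer = resp.json()["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": answer})
    return answer

print(ask("What is 2+2?"))
print(ask("What about 3+3?"))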
Story Generation with Iroha
# Story generation with the Iroha model
response = requests.post(
    "https://api.rustellar.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json"
    },
    json={
        "model": "iroha",
        "messages": [
            {
                "role": "system",
                "content": "You are a creative storyteller."
            },
            {
                "role": "user",
                "content": "Write a short story about a space explorer."
            }
        ],
        "temperature": 0.9,  # set higher for more creative output
        "max_tokens": 2000
    }
)
print(response.json())
Best Practices
- Use system messages to set consistent behavior
- Keep conversation history for context in multi-turn dialogues
- Adjust temperature based on use case:
- Lower (0.1-0.5) for factual, consistent responses
- Higher (0.7-1.2) for creative content
- Set appropriate max_tokens to control response length and costs
- Use stop sequences to prevent unwanted continuation (see the sketch after this list)
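A short sketch of the stop-sequence idea referenced above: here generation halts as soon as the model emits either "END" or a blank line (the sequences shown are illustrative):

# Illustrative use of stop sequences: generation halts when the model
# emits "END" or a blank line (sequences shown are examples only)
payload = {
    "model": "helix-v1",
    "messages": [
        {"role": "user", "content": "Write one haiku, then the word END."}
    ],
    "stop": ["END", "\n\n"]
}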
Next Steps
- Learn about Output Formats
- Check Rate Limits
- Explore Model Options