
Input Formats

Learn about the supported input formats for Rustellar AI models.

Request Structure

All API requests use an OpenAI-compatible format, so existing OpenAI client libraries and tooling can integrate with minimal changes.
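If that compatibility extends to client libraries, the official openai Python package can likely be pointed at the Rustellar endpoint directly. A minimal sketch, assuming the base URL from the examples below and full OpenAI client compatibility (an assumption, not a documented guarantee):

import os
from openai import OpenAI

# Assumption: the Rustellar endpoint accepts OpenAI client traffic at this base URL
client = OpenAI(
    base_url="https://api.rustellar.com/v1",
    api_key=os.environ["RUSTELLAR_API_KEY"]
)

response = client.chat.completions.create(
    model="helix-v1",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)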

Basic Request Format

{
  "model": "helix-v1",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Hello!"
    }
  ],
  "temperature": 0.7,
  "max_tokens": 1000
}

Message Roles

System

System messages set the behavior and context for the AI model.

{
  "role": "system",
  "content": "You are a helpful assistant specialized in programming."
}

User

User messages represent the input from the end user.

{
  "role": "user",
  "content": "What is Python?"
}

Assistant

Assistant messages represent previous responses from the AI model and are used to carry conversation history in multi-turn conversations.

{
  "role": "assistant",
  "content": "Python is a high-level programming language..."
}

Parameters

Required Parameters

Parameter  Type    Description
model      string  The model to use (e.g., "helix-v1", "iroha")
messages   array   Array of message objects with role and content

Optional Parameters

Parameter          Type             Default  Description
temperature        float            0.7      Controls randomness (0.0 - 2.0)
max_tokens         integer          2048     Maximum tokens in the response
top_p              float            1.0      Nucleus sampling parameter
frequency_penalty  float            0.0      Reduces repetition (-2.0 to 2.0)
presence_penalty   float            0.0      Encourages new topics (-2.0 to 2.0)
stop               string or array  null     Sequences where generation stops
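As a sketch of how the optional parameters combine, the request below sets several of them at once. The parameter names come from the table above; whether a given model honors every one of them is an assumption.

import requests

response = requests.post(
    "https://api.rustellar.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json"
    },
    json={
        "model": "helix-v1",
        "messages": [
            {"role": "user", "content": "List three facts about the Moon."}
        ],
        "temperature": 0.3,        # low for factual output
        "max_tokens": 300,
        "top_p": 0.9,
        "frequency_penalty": 0.5,  # discourage repeated phrasing
        "stop": ["\n\n"]           # stop at the first blank line
    }
)

print(response.json())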

Example Requests

Simple Conversation

import requests

# Simple conversation request
response = requests.post(
    "https://api.rustellar.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json"
    },
    json={
        "model": "helix-v1",
        "messages": [
            {"role": "user", "content": "Tell me a joke"}
        ]
    }
)

print(response.json())
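Assuming the response follows the usual OpenAI-compatible chat completion shape (a choices array containing a message object), the assistant's text can be pulled out like this; treat the exact field names as an assumption if Rustellar's payload differs:

# Assumed OpenAI-compatible response shape: choices[0].message.content
data = response.json()
print(data["choices"][0]["message"]["content"])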

Multi-turn Conversation

# Request including multi-turn conversation history
response = requests.post(
    "https://api.rustellar.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json"
    },
    json={
        "model": "helix-v1",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is 2+2?"},
            {"role": "assistant", "content": "2+2 equals 4."},
            {"role": "user", "content": "What about 3+3?"}
        ],
        "temperature": 0.7,
        "max_tokens": 100
    }
)

print(response.json())

Story Generation with Iroha

# Story generation with the Iroha model
response = requests.post(
    "https://api.rustellar.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json"
    },
    json={
        "model": "iroha",
        "messages": [
            {
                "role": "system",
                "content": "You are a creative storyteller."
            },
            {
                "role": "user",
                "content": "Write a short story about a space explorer."
            }
        ],
        "temperature": 0.9,  # set higher for more creative output
        "max_tokens": 2000
    }
)

print(response.json())

Best Practices

  1. Use system messages to set consistent behavior
  2. Keep conversation history for context in multi-turn dialogues (see the sketch after this list)
  3. Adjust temperature based on use case:
    • Lower (0.1-0.5) for factual, consistent responses
    • Higher (0.7-1.2) for creative content
  4. Set appropriate max_tokens to control response length and costs
  5. Use stop sequences to prevent unwanted continuation (also shown below)
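The following minimal sketch combines several of these practices: it maintains conversation history across turns, caps max_tokens, and passes a stop sequence. The ask helper and the response field names (choices[0].message.content) are assumptions based on the OpenAI-compatible format, not a documented Rustellar API.

import requests

API_URL = "https://api.rustellar.com/v1/chat/completions"
HEADERS = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

# Conversation history, starting with a system message for consistent behavior
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(question):
    # Append the user turn, send the full history, and store the reply
    history.append({"role": "user", "content": question})
    response = requests.post(API_URL, headers=HEADERS, json={
        "model": "helix-v1",
        "messages": history,
        "max_tokens": 200,     # cap response length and cost
        "stop": ["\nUser:"]    # illustrative stop sequence; prevents runaway continuation
    })
    reply = response.json()["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("What is 2+2?"))
print(ask("What about 3+3?"))  # the second turn sees the earlier exchange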

Next Steps