> [!NOTE] Quick Start
> 1. Get your API key from [Anthropic Console](https://console.anthropic.com)
> 2. Store it securely (never in plain text or public repos)
> 3. Make API calls using your preferred method
> 4. Monitor usage to stay within rate limits
# Claude API Cheat Sheet 🤖
## Available Models
### Claude 3 Family (Latest)
- **Claude 3 Opus**
  - Most powerful model
  - Context: 200K tokens
  - Best for: Complex research, advanced coding, strategic analysis
  - Cost:
    - Input: $15/1M tokens
    - Output: $75/1M tokens
    - Cache Write: $18.75/1M tokens
    - Cache Read: $1.50/1M tokens
- **Claude 3 Sonnet**
  - Balanced performance and cost
  - Context: 200K tokens
  - Best for: Multi-step workflows, customer support, coding
  - Cost:
    - Input: $3/1M tokens
    - Output: $15/1M tokens
    - Cache Write: $3.75/1M tokens
    - Cache Read: $0.30/1M tokens
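The per-million-token prices above translate directly into a per-request cost estimate; a minimal sketch, assuming you take the token counts from the API response's usage data (cache pricing omitted for brevity):

```python
# Per-million-token prices (USD) from the table above.
PRICES = {
    "claude-3-opus-20240229":   {"input": 15.0, "output": 75.0},
    "claude-3-sonnet-20240229": {"input": 3.0,  "output": 15.0},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one request in USD from its token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# e.g. 2,000 input + 500 output tokens on Sonnet:
# 2000 * $3/1M + 500 * $15/1M = $0.006 + $0.0075 = $0.0135
```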
## API Endpoints
### Messages API (Recommended)
```bash
POST https://api.anthropic.com/v1/messages
```
```json
{
  "model": "claude-3-opus-20240229",
  "messages": [
    {
      "role": "user",
      "content": "Hello!"
    }
  ],
  "max_tokens": 1024
}
```
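The same request can be built without the SDK; a sketch of the required headers and body (`YOUR_API_KEY` is a placeholder, and the `anthropic-version` value shown is a dated API version string the endpoint requires):

```python
import json

API_URL = "https://api.anthropic.com/v1/messages"

headers = {
    "x-api-key": "YOUR_API_KEY",        # placeholder: your key from the Console
    "anthropic-version": "2023-06-01",  # required version header
    "content-type": "application/json",
}

body = {
    "model": "claude-3-opus-20240229",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello!"}],
}

payload = json.dumps(body)
# Send with any HTTP client, e.g.:
# requests.post(API_URL, headers=headers, data=payload)
```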
### System Prompts
In the Messages API, the system prompt is a top-level `system` parameter, not a message with a `system` role:
```json
{
  "model": "claude-3-sonnet-20240229",
  "max_tokens": 1024,
  "system": "You are a helpful AI assistant.",
  "messages": [
    {
      "role": "user",
      "content": "Hello!"
    }
  ]
}
```
## Common Parameters
### Message Parameters
- `model`: Model identifier (e.g., "claude-3-opus-20240229")
- `messages`: Array of message objects with alternating `user`/`assistant` roles
- `max_tokens`: Maximum response length (required)
- `system`: System prompt text (top-level parameter, not a message)
- `temperature`: Randomness (0–1)
- `top_p`: Nucleus sampling
- `top_k`: Top-k sampling
- `stream`: Enable streaming responses
- `stop_sequences`: Custom strings that stop generation
### Message Object Structure
Valid roles are `user` and `assistant`; system instructions go in the top-level `system` parameter instead.
```json
{
  "role": "user|assistant",
  "content": "Message text or array of content blocks"
}
```
### Content Blocks
`content` can be an array mixing block types; each block carries only the fields for its type:
```json
[
  {
    "type": "text",
    "text": "What is in this image?"
  },
  {
    "type": "image",
    "source": {
      "type": "base64",
      "media_type": "image/jpeg",
      "data": "base64_encoded_image_data"
    }
  }
]
```
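Building the image block above from raw bytes is mostly base64 plumbing; a minimal sketch with two helper functions (the helper names are illustrative, not part of the SDK):

```python
import base64

def image_block(data: bytes, media_type: str = "image/jpeg") -> dict:
    """Wrap raw image bytes in a base64 image content block."""
    return {
        "type": "image",
        "source": {
            "type": "base64",
            "media_type": media_type,
            "data": base64.b64encode(data).decode("ascii"),
        },
    }

def text_block(text: str) -> dict:
    """Wrap plain text in a text content block."""
    return {"type": "text", "text": text}

# A multimodal user message combines both block types in one content array:
content = [image_block(b"<raw jpeg bytes>"), text_block("What is in this image?")]
```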
## Python SDK Usage
### Basic Example
```python
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)
print(message.content[0].text)  # content is a list of content blocks
```
### Streaming Example
```python
from anthropic import Anthropic

client = Anthropic()
with client.messages.stream(
    model="claude-3-opus-20240229",
    max_tokens=1024,  # required, as in non-streaming calls
    messages=[{"role": "user", "content": "Write a story"}]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
```
## Best Practices
### Security
1. **API Key Management**
   - Store in environment variables
   - Use secure vaults in production
   - Rotate keys periodically
   - Monitor usage for unauthorized access
2. **Request Validation**
   - Validate input before sending
   - Set appropriate `max_tokens`
   - Implement rate limiting
   - Handle errors gracefully
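The validation points above can be sketched as a pre-flight check; this is an illustrative helper, not an SDK function, and the `max_tokens` cap shown is a placeholder since real limits vary by model:

```python
def validate_request(messages: list[dict], max_tokens: int) -> list[str]:
    """Return a list of problems; an empty list means the request looks sane."""
    problems = []
    if not messages:
        problems.append("messages must not be empty")
    elif messages[0].get("role") != "user":
        problems.append("conversation must start with a user message")
    for i, m in enumerate(messages):
        if m.get("role") not in ("user", "assistant"):
            problems.append(f"message {i}: role must be 'user' or 'assistant'")
        if not m.get("content"):
            problems.append(f"message {i}: content must not be empty")
    if not 0 < max_tokens <= 4096:  # illustrative cap; actual limits vary by model
        problems.append("max_tokens out of range")
    return problems
```

Rejecting malformed requests locally saves a round trip and avoids paying for input tokens on calls that would fail anyway.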
### Performance
1. **Optimization**
   - Use streaming for long responses
   - Implement caching when possible
   - Batch requests when appropriate
   - Monitor token usage
2. **Cost Management**
   - Set token limits
   - Use appropriate models
   - Implement usage monitoring
   - Cache frequent requests
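Caching frequent requests can be as simple as keying on a stable hash of the request payload; a minimal in-memory sketch (production code would add TTLs and a shared store such as Redis):

```python
import hashlib
import json

_cache: dict[str, str] = {}

def cache_key(model: str, messages: list[dict]) -> str:
    """Stable hash of the request payload, independent of dict ordering."""
    raw = json.dumps({"model": model, "messages": messages}, sort_keys=True)
    return hashlib.sha256(raw.encode()).hexdigest()

def cached_call(model: str, messages: list[dict], call) -> str:
    """Return a cached completion, invoking `call` only on a cache miss.

    `call` is whatever function performs the real API request,
    e.g. a wrapper around client.messages.create.
    """
    key = cache_key(model, messages)
    if key not in _cache:
        _cache[key] = call(model, messages)
    return _cache[key]
```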
## Error Handling
### Common Error Codes
- 400: Invalid request (malformed parameters)
- 401: Invalid API key
- 429: Rate limit exceeded
- 500: Server error
- 503: Service unavailable
- 529: API temporarily overloaded
### Error Response Format
```json
{
  "error": {
    "type": "invalid_request_error",
    "message": "Error details here"
  }
}
```
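A small helper implementing the usual split between transient statuses (retry with backoff) and fatal ones (fix the request or credentials); the exact classification is a judgment call, not an API contract:

```python
RETRYABLE = {429, 500, 503, 529}  # transient: back off and retry
FATAL = {400, 401, 403}           # fix the request or credentials instead

def should_retry(status: int) -> bool:
    """True for transient errors worth retrying with exponential backoff."""
    return status in RETRYABLE
```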
## Rate Limits
### Default Limits
- Requests per minute: Varies by tier
- Concurrent requests: Varies by tier
- Contact Anthropic for increased limits
### Best Practices
- Implement exponential backoff
- Cache responses when possible
- Monitor remaining quota via the rate-limit response headers
- Set user-level rate limits
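The exponential-backoff recommendation above, sketched as a jittered delay schedule ("full jitter": each sleep is a random amount up to the exponentially growing cap, which spreads out retries from many clients):

```python
import random

def backoff_delays(retries: int, base: float = 1.0, cap: float = 60.0):
    """Yield exponentially growing, jittered sleep times in seconds."""
    for attempt in range(retries):
        # Cap the exponential growth, then draw a random delay below it.
        yield random.uniform(0, min(cap, base * 2 ** attempt))

# Usage sketch:
#   for delay in backoff_delays(5):
#       time.sleep(delay)
#       if retry_request():
#           break
```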
## Related Resources
- [Official Documentation](https://docs.anthropic.com/claude/reference)
- [Python SDK Documentation](https://github.com/anthropics/anthropic-sdk-python)
- [API Reference](https://docs.anthropic.com/claude/reference/getting-started-with-the-api)
- [Claude Examples](https://github.com/anthropics/anthropic-cookbook)