
🤖 Simple AI Integration Methods

Quick & Easy Ways to Add AI to Your Python Admin Panel

1. OpenAI (ChatGPT) EASY RECOMMENDED

```bash
pip install openai
```

```python
from openai import OpenAI

client = OpenAI(api_key="sk-your-api-key-here")

# Simple way - get an AI response
user_input = "Ban all users from Russia who failed login 5+ times"

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are an admin panel AI assistant. Analyze commands and provide actions."},
        {"role": "user", "content": user_input}
    ]
)

ai_answer = response.choices[0].message.content
print(ai_answer)
# Output: "I'll ban users matching: country='Russia' AND failed_logins >= 5. Execute query: UPDATE users SET banned=1 WHERE..."

# Use in the admin panel
def analyze_user_behavior(user_data):
    prompt = f"Analyze this user: {user_data}. Should I ban, monitor, or approve?"
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # Cheaper than GPT-4
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

# Example
user = {"failed_logins": 15, "account_age": 1, "suspicious_ips": 8}
decision = analyze_user_behavior(user)
print(decision)  # AI recommends an action
```

✓ Advantages

  • Extremely easy to use (3 lines of code!)
  • Best AI quality available
  • Massive knowledge base
  • Great documentation and community
  • Works immediately (just an API key, no other setup)
  • Can understand complex requests

✗ Disadvantages

  • Costs money per request
  • Requires internet connection
  • Slower (1-3 seconds per request)
  • Data sent to OpenAI servers
  • Rate limits (3-5k requests/min)
💰 Pricing: GPT-3.5: $0.0005-$0.0015 per 1K tokens (~750 words) | GPT-4: $0.03-$0.06 per 1K tokens
Example: 1000 AI queries/day ≈ $15-30/month (GPT-3.5) or $300-600/month (GPT-4)
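The rate limits listed above are worth planning for from day one. A minimal, framework-agnostic retry wrapper can be sketched like this; it is not OpenAI-specific, and in real code you would catch the client library's specific rate-limit exception rather than a bare `Exception`:

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry a callable with exponential backoff (e.g. on HTTP 429 rate-limit errors)."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries, re-raise the last error
            # 1s, 2s, 4s, ... plus jitter so many clients don't retry in lockstep
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

Usage: wrap any API call in a lambda, e.g. `with_backoff(lambda: client.chat.completions.create(...))`.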

🎯 Perfect For:

  • Natural language commands: "Show me suspicious users from yesterday"
  • Smart analysis: Security event analysis, log interpretation
  • Auto-responses: Support ticket replies, user notifications
  • Code generation: Generate SQL queries, API calls from text
  • Report writing: Auto-generate security reports, summaries

2. Anthropic Claude EASY BEST FOR SECURITY

```bash
pip install anthropic
```

```python
import anthropic

client = anthropic.Anthropic(api_key="sk-ant-your-key")

# Simple usage
user_activity = "User tried 50 different passwords in 2 minutes from 10 different IPs"

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": f"Analyze this activity and tell me if it's an attack: {user_activity}"}
    ]
)

ai_response = response.content[0].text
print(ai_response)
# Output: "This is clearly a brute force attack. Immediate actions: 1. Ban user, 2. Block IPs..."

# Use for security analysis
def analyze_security_event(event_data):
    prompt = f"""Security Event Analysis:
Event: {event_data['type']}
User: {event_data['user_id']}
IP: {event_data['ip']}
Details: {event_data['details']}

Provide: Risk Level (Low/Medium/High/Critical), Threat Type, Recommended Action"""

    message = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}]
    )
    return message.content[0].text

# Example
event = {
    "type": "multiple_failed_logins",
    "user_id": "user_123",
    "ip": "192.168.1.100",
    "details": "15 failed attempts in 5 minutes"
}
analysis = analyze_security_event(event)
print(analysis)
```

✓ Advantages

  • Best for security and reasoning tasks
  • More accurate analysis than GPT
  • Longer context (200K tokens!)
  • Better at following instructions
  • More ethical/careful responses
  • Super easy to use

✗ Disadvantages

  • Costs money (similar to OpenAI)
  • Requires internet
  • Slightly slower than GPT-3.5
  • Less popular, so fewer tutorials online
  • Data goes to Anthropic servers
💰 Pricing: Claude Sonnet: $3 per million input tokens, $15 per million output tokens
Example: 1000 security analyses/day ≈ $20-40/month

🎯 Perfect For:

  • Security incident analysis and threat detection
  • Complex reasoning about user behavior
  • Long document analysis (logs, reports)
  • Ethical decision making (ban/unban users)
  • Code review and vulnerability detection

3. Google Gemini EASY FREE TIER

```bash
pip install google-generativeai
```

```python
import google.generativeai as genai

genai.configure(api_key="your-google-api-key")
model = genai.GenerativeModel("gemini-pro")

# Super simple usage
user_query = "Analyze failed login attempts and suggest security rules"
response = model.generate_content(user_query)
print(response.text)

# Use in the admin panel
def get_ai_recommendation(data):
    prompt = f"User data: {data}. Recommend action: BAN, MONITOR, or APPROVE?"
    response = model.generate_content(prompt)
    return response.text

# Example
user_data = "New user, 20 failed logins, VPN detected, suspicious country"
action = get_ai_recommendation(user_data)
print(action)  # AI suggests BAN

# Multi-modal: analyze screenshots!
def analyze_screenshot(image_path):
    import PIL.Image
    img = PIL.Image.open(image_path)
    # Image input needs a vision-capable model, not the text-only one above
    vision_model = genai.GenerativeModel("gemini-pro-vision")
    response = vision_model.generate_content([
        "What security issues do you see in this admin panel screenshot?",
        img
    ])
    return response.text
```

✓ Advantages

  • FREE tier available! (60 requests/min)
  • Can analyze images and screenshots
  • Very fast responses
  • Easy to use like GPT
  • Good at data analysis
  • Integrated with Google Cloud

✗ Disadvantages

  • Not as good as GPT-4 or Claude
  • Sometimes gives inconsistent answers
  • Limited free tier (60/min)
  • Data goes to Google
  • Newer, fewer examples online
💰 Pricing: FREE up to 60 requests/min! Paid: $0.00025 per 1K chars (very cheap!)
Example: 1000 queries/day = FREE if under 60/min, or ~$5/month if more

🎯 Perfect For:

  • Budget-conscious projects (FREE tier!)
  • Image analysis (screenshots, user uploads)
  • Quick data analysis and summaries
  • Testing AI features before paying
  • High-frequency, simple tasks

4. Ollama (Local LLM) MEDIUM FREE

```bash
# Install Ollama from: ollama.ai
# Then:
ollama pull llama2
pip install ollama
```

```python
import ollama

# Simple usage - runs on YOUR computer!
response = ollama.chat(
    model="llama2",
    messages=[
        {"role": "user", "content": "Analyze this user activity: 50 failed logins in 1 hour"}
    ]
)
print(response["message"]["content"])

# Use in the admin panel
def analyze_locally(user_data):
    prompt = f"""User Analysis:
Failed Logins: {user_data['failed_logins']}
Account Age: {user_data['account_age']} days
Suspicious IPs: {user_data['suspicious_ips']}

Should I: BAN, MONITOR, or APPROVE? Explain briefly."""

    response = ollama.chat(
        model="llama2",  # or 'mistral', 'codellama', 'phi'
        messages=[{"role": "user", "content": prompt}]
    )
    return response["message"]["content"]

# Example
user = {"failed_logins": 25, "account_age": 1, "suspicious_ips": 12}
decision = analyze_locally(user)
print(decision)

# Stream responses for real-time feedback
def stream_analysis(query):
    stream = ollama.chat(
        model="llama2",
        messages=[{"role": "user", "content": query}],
        stream=True
    )
    for chunk in stream:
        print(chunk["message"]["content"], end="", flush=True)
```

✓ Advantages

  • 100% FREE - no API costs!
  • Complete privacy - data never leaves your PC
  • Works offline (no internet needed)
  • No rate limits
  • Many models: Llama2, Mistral, CodeLlama
  • Easy to use like OpenAI

✗ Disadvantages

  • Requires good computer (8GB+ RAM)
  • Slower than cloud APIs
  • Not as smart as GPT-4 or Claude
  • First response takes time to load
  • You manage everything (updates, bugs)
💰 Pricing: FREE FOREVER! Just electricity costs :)
Hardware: Minimum 8GB RAM for Llama2-7B, 16GB+ recommended for better models

🎯 Perfect For:

  • High-security environments (no data leaves your network)
  • Unlimited queries without costs
  • Offline admin panels
  • Learning and experimenting
  • Simple analysis tasks

📦 Available Models (Just download and use!)

  • 🦙 llama2 - General purpose, best for reasoning
  • mistral - Fast and smart, good balance
  • 💻 codellama - Best for code analysis
  • 🔬 phi - Tiny but capable (3GB model!)
  • 🎯 neural-chat - Conversational AI

5. Hugging Face 🤗 MEDIUM FREE API

```bash
pip install huggingface_hub requests
```

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.1"
headers = {"Authorization": "Bearer hf_your_token"}

def query(payload, url=API_URL):
    response = requests.post(url, headers=headers, json=payload)
    return response.json()

# Simple usage
user_input = "Analyze: User has 30 failed logins. Should I ban them?"
output = query({"inputs": user_input})
print(output[0]["generated_text"])

# Use in the admin panel
def ai_security_check(event_description):
    prompt = f"Security Event: {event_description}\nRisk Level (Low/Medium/High):"
    result = query({
        "inputs": prompt,
        "parameters": {"max_new_tokens": 100, "temperature": 0.7}
    })
    return result[0]["generated_text"]

# Example
event = "User from unknown country tried 100 passwords in 5 minutes"
analysis = ai_security_check(event)
print(analysis)

# Sentiment analysis (for support tickets)
SENTIMENT_URL = "https://api-inference.huggingface.co/models/distilbert-base-uncased-finetuned-sst-2-english"

def analyze_ticket_sentiment(ticket_text):
    # Must hit the sentiment model's URL, not the text-generation one above
    return query({"inputs": ticket_text}, url=SENTIMENT_URL)

ticket = "This app is terrible! It crashes every time!"
sentiment = analyze_ticket_sentiment(ticket)
# Classification models typically return a nested list of label/score dicts
print(f"Sentiment: {sentiment[0][0]['label']}")  # NEGATIVE
```

✓ Advantages

  • FREE API tier (rate limited)
  • Thousands of models to choose from
  • Specialized models (sentiment, NER, etc)
  • Can download and run locally too
  • Great for specific tasks
  • Open source community

✗ Disadvantages

  • Not as smart as GPT-4/Claude
  • Free tier is slow (cold starts)
  • Limited models on free tier
  • More complex to use
  • Requires understanding different models
💰 Pricing: FREE tier available! Paid: $0.06-$1 per 1M tokens depending on model
Example: Small tasks = FREE, Heavy usage = $10-50/month

🎯 Perfect For:

  • Specific tasks: sentiment analysis, classification
  • Testing different AI models
  • Budget-friendly AI (free tier!)
  • Open-source enthusiasts
  • Learning about different AI models

6. Cohere EASY FREE TRIAL

```bash
pip install cohere
```

```python
import cohere
from cohere import ClassifyExample

co = cohere.Client("your-api-key")

# Simple usage
response = co.generate(
    prompt="Analyze this user behavior and suggest action: User tried 40 passwords in 2 minutes",
    max_tokens=200
)
print(response.generations[0].text)

# Use in the admin panel
def classify_security_event(event_text):
    response = co.classify(
        model="embed-english-v2.0",
        inputs=[event_text],
        examples=[
            ClassifyExample(text="50 failed logins from same IP", label="attack"),
            ClassifyExample(text="User logged in from new device", label="suspicious"),
            ClassifyExample(text="Normal app usage", label="normal"),
            ClassifyExample(text="Brute force attempt detected", label="attack"),
        ]
    )
    return response.classifications[0].prediction

# Example
event = "Multiple password attempts from different countries simultaneously"
classification = classify_security_event(event)
print(f"Event type: {classification}")  # "attack"

# Semantic search in logs
def search_similar_events(query_text, event_log, top_k=3):
    response = co.embed(
        texts=[query_text] + event_log,
        model="embed-english-v2.0"
    )
    embeddings = response.embeddings
    query_vec, event_vecs = embeddings[0], embeddings[1:]

    # Cosine similarity between the query and every log line
    import math
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    scored = sorted(zip(event_log, (cosine(query_vec, v) for v in event_vecs)),
                    key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]  # the most similar events, best match first
```

✓ Advantages

  • Great for classification tasks
  • Excellent semantic search
  • FREE trial (no credit card!)
  • Fast API responses
  • Good for text embeddings
  • Simple to integrate

✗ Disadvantages

  • Not as powerful as GPT-4
  • Limited to specific tasks
  • Smaller community
  • Paid after trial
  • Less flexible than ChatGPT
💰 Pricing: FREE trial with 100 API calls! Paid: $1-2 per 1K requests
Example: 1000 classifications/day ≈ $30-60/month

🎯 Perfect For:

  • Classifying security events into categories
  • Semantic search in logs and tickets
  • Text embeddings for similarity matching
  • Automated ticket routing
  • Content categorization

7. Replicate EASY

```bash
pip install replicate
```

```python
import replicate

# Simple usage - run any hosted AI model!
output = replicate.run(
    "meta/llama-2-70b-chat",
    input={
        "prompt": "Analyze this security event: 100 failed logins in 1 minute. What should I do?",
        "max_tokens": 200
    }
)
print("".join(output))

# Use in the admin panel
def analyze_with_ai(user_data):
    prompt = f"""Security Analysis Request:
User ID: {user_data['id']}
Failed Logins: {user_data['failed_logins']}
Suspicious Activity: {user_data['activity']}

Recommend: BAN / MONITOR / CLEAR"""

    output = replicate.run(
        "meta/llama-2-13b-chat",
        input={"prompt": prompt}
    )
    return "".join(output)

# Example
user = {
    "id": "user_123",
    "failed_logins": 45,
    "activity": "Multiple IPs, VPN usage, pattern matching bot"
}
recommendation = analyze_with_ai(user)
print(recommendation)

# Image analysis (screenshots, uploads)
def analyze_screenshot(image_url):
    # Community models may need a pinned version hash, e.g. "salesforce/blip:<version>"
    output = replicate.run(
        "salesforce/blip",
        input={"image": image_url, "task": "image_captioning"}
    )
    return output
```

✓ Advantages

  • Run ANY AI model easily
  • Access to latest open-source models
  • Pay only for what you use
  • No server setup required
  • Image/video/audio AI available
  • Very simple API

✗ Disadvantages

  • Can be expensive for heavy use
  • Slower than dedicated APIs
  • Cold start delays
  • Less reliable than OpenAI
  • Requires payment setup
💰 Pricing: Pay-per-second model usage. Example: Llama2-70B = $0.0008/second
Example: 100 requests/day (5 sec each) ≈ $12/month

🎯 Perfect For:

  • Testing different AI models quickly
  • Image/video analysis tasks
  • Using latest open-source models
  • Low-volume AI tasks
  • Specialized AI models

8. Mistral AI EASY FAST & CHEAP

```bash
pip install mistralai
```

```python
from mistralai import Mistral

client = Mistral(api_key="your-api-key")

# Simple usage
response = client.chat.complete(
    model="mistral-medium-latest",
    messages=[
        {"role": "user", "content": "Analyze: User failed 30 logins. Ban or monitor?"}
    ]
)
print(response.choices[0].message.content)

# Use in the admin panel
def security_analysis(event_data):
    prompt = f"""Security Event Analysis:
Type: {event_data['type']}
User: {event_data['user']}
Details: {event_data['details']}

Provide: Risk Score (0-100), Action (BAN/MONITOR/ALLOW), Reason"""

    response = client.chat.complete(
        model="mistral-small-latest",  # or "mistral-medium-latest" / "mistral-large-latest"
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

# Example
event = {
    "type": "brute_force_attempt",
    "user": "attacker_user",
    "details": "50 failed logins from Russian IP using Tor"
}
analysis = security_analysis(event)
print(analysis)

# Function calling (execute admin commands)
def ai_execute_command(natural_language_command):
    tools = [
        {
            "type": "function",
            "function": {
                "name": "ban_user",
                "description": "Ban a user from the system",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "user_id": {"type": "string"},
                        "reason": {"type": "string"}
                    }
                }
            }
        }
    ]
    response = client.chat.complete(
        model="mistral-large-latest",
        messages=[{"role": "user", "content": natural_language_command}],
        tools=tools,
        tool_choice="auto"
    )
    return response
```

✓ Advantages

  • VERY fast responses (1-2 seconds)
  • Cheaper than GPT-4
  • Good quality reasoning
  • Function calling support
  • European company (GDPR compliant)
  • Simple API like OpenAI

✗ Disadvantages

  • Not quite as good as GPT-4
  • Smaller context window than Claude
  • Newer, fewer examples online
  • Paid service (no free tier)
  • Limited to text (no images)
💰 Pricing: Mistral Small: $0.002/1K tokens | Medium: $0.0065/1K | Large: $0.024/1K
Example: 1000 requests/day with Small ≈ $6-12/month (cheaper than GPT!)

🎯 Perfect For:

  • Fast security analysis (real-time checks)
  • Budget-friendly AI (cheaper than GPT-4)
  • GDPR/European compliance needs
  • Function calling (execute commands)
  • High-frequency tasks

📊 Quick Comparison Table

| Method | Difficulty | Cost/Month | Quality | Speed | Best Use |
|---|---|---|---|---|---|
| OpenAI GPT | ⭐ Easy | $15-300 | ⭐⭐⭐⭐⭐ | Fast | General AI tasks |
| Claude | ⭐ Easy | $20-200 | ⭐⭐⭐⭐⭐ | Fast | Security analysis |
| Google Gemini | ⭐ Easy | $0-10 | ⭐⭐⭐⭐ | Very Fast | FREE tier, images |
| Ollama | ⭐⭐ Medium | $0 FREE | ⭐⭐⭐ | Slower | Privacy, offline |
| Hugging Face | ⭐⭐ Medium | $0-50 | ⭐⭐⭐ | Slow (free) | Specific tasks |
| Cohere | ⭐ Easy | $0-60 | ⭐⭐⭐ | Fast | Classification |
| Replicate | ⭐ Easy | $10-100 | ⭐⭐⭐⭐ | Medium | Any AI model |
| Mistral AI | ⭐ Easy | $6-100 | ⭐⭐⭐⭐ | Very Fast | Fast & cheap |

💡 My Recommendations for hx7 Admin Panel

🏆 Option 1: Best Overall (Recommended for Most)

Use: OpenAI GPT-3.5 Turbo + Ollama

  • ✓ OpenAI for complex analysis, user questions, reports ($15-50/month)
  • ✓ Ollama (local) for simple checks, real-time analysis (FREE)
  • ✓ Total cost: ~$15-50/month
  • ✓ Best balance of quality, cost, and privacy
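The hybrid setup can be as small as a router function. This is a sketch under stated assumptions: `local_llm` and `cloud_llm` are hypothetical callables wrapping your Ollama and OpenAI clients, and the length/keyword heuristic is just one possible routing rule to tune:

```python
def route_query(question, local_llm, cloud_llm,
                complex_keywords=("report", "summarize", "explain")):
    """Send cheap real-time checks to the free local model and
    long or complex requests to the paid cloud API."""
    text = question.lower()
    is_complex = len(question) > 200 or any(k in text for k in complex_keywords)
    return cloud_llm(question) if is_complex else local_llm(question)
```

Usage: `route_query("Is user_42 suspicious?", ollama_ask, openai_ask)` keeps the simple check local and free.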

💰 Option 2: Maximum Security & Privacy

Use: Ollama Only (100% Local)

  • ✓ All AI runs on your server
  • ✓ No data leaves your network
  • ✓ FREE forever
  • ✗ Requires good hardware (16GB+ RAM)

🚀 Option 3: Best for Startups (Budget)

Use: Google Gemini + Mistral AI

  • ✓ Gemini FREE tier for most tasks
  • ✓ Mistral for complex cases (cheap)
  • ✓ Total cost: ~$5-20/month
  • ✓ Good quality at minimal cost

⚡ Option 4: Professional/Enterprise

Use: Claude Sonnet 4 + OpenAI GPT-4

  • ✓ Claude for security analysis (best reasoning)
  • ✓ GPT-4 for complex questions, reports
  • ✓ Total cost: ~$100-300/month
  • ✓ Highest quality AI available

🎯 Step-by-Step: Add AI to Your Admin Panel NOW

Step 1: Choose Your Method (5 minutes)

```bash
# For OpenAI (recommended):
pip install openai

# For Claude:
pip install anthropic

# For local/free: download from ollama.ai, then:
ollama pull llama2

# For Google Gemini:
pip install google-generativeai
```

Step 2: Create Simple AI Helper (10 minutes)

```python
# ai_helper.py
from openai import OpenAI

class AdminAI:
    def __init__(self, api_key):
        self.client = OpenAI(api_key=api_key)

    def analyze(self, data, question):
        """Simple AI analysis."""
        prompt = f"Data: {data}\n\nQuestion: {question}\n\nAnswer:"
        response = self.client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}]
        )
        return response.choices[0].message.content

# Usage in your admin panel
ai = AdminAI("your-api-key")

# Example 1: analyze a user
user_data = {"failed_logins": 30, "account_age": 2}
result = ai.analyze(user_data, "Should I ban this user?")
print(result)

# Example 2: analyze a security event
event = {"type": "brute_force", "ip": "192.168.1.100"}
result = ai.analyze(event, "What type of attack is this?")
print(result)
```

Step 3: Integrate into Your Admin Panel (15 minutes)

```python
# In your admin panel code:
from ai_helper import AdminAI

ai = AdminAI("sk-your-key")

# When viewing user details:
def check_user(user_id):
    user = get_user_from_database(user_id)

    # Get an AI recommendation
    ai_advice = ai.analyze(
        data=user,
        question="Analyze this user's activity. Should I ban, monitor, or approve?"
    )

    # Show it to the admin
    print(f"User: {user['username']}")
    print(f"AI Recommendation: {ai_advice}")
    return ai_advice

# When reviewing security events:
def analyze_security_event(event_id):
    event = get_event_from_database(event_id)
    ai_analysis = ai.analyze(
        data=event,
        question="What kind of attack is this? How severe? What should I do?"
    )
    return ai_analysis
```

✅ You're Done! Now You Can:

  • Ask AI to analyze any user: "Is this user suspicious?"
  • Get security recommendations: "What type of attack is this?"
  • Auto-generate reports: "Summarize today's security events"
  • Smart search: "Find users with suspicious behavior"
  • Natural language commands: "Ban all Russian IPs"
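One practical caveat for the list above: never execute free-form AI text directly. A minimal sketch of mapping an AI answer onto a whitelist of admin actions (the function name and the naive substring matching are illustrative assumptions, not a hardened parser):

```python
def parse_decision(ai_text, allowed=("BAN", "MONITOR", "APPROVE")):
    """Map free-form AI output to a whitelisted admin action.
    Naive substring matching; never run raw AI text as a command."""
    upper = ai_text.upper()
    for action in allowed:
        if action in upper:
            return action
    return "MONITOR"  # safe default when the AI answer is ambiguous
```

This way the AI only ever recommends, and your own code decides which of a fixed set of actions actually runs.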

💵 Cost Calculator: How Much Will AI Cost You?

📊 Example Scenarios:

Small Admin Panel (100 queries/day):
  • OpenAI GPT-3.5: ~$5-10/month
  • Claude Sonnet: ~$8-12/month
  • Google Gemini: FREE
  • Ollama (local): FREE
Medium Admin Panel (500 queries/day):
  • OpenAI GPT-3.5: ~$25-50/month
  • Claude Sonnet: ~$40-60/month
  • Mistral AI: ~$15-30/month
  • Google Gemini: $5-10/month
Large Admin Panel (2000+ queries/day):
  • OpenAI GPT-4: ~$500-1000/month
  • Claude Sonnet: ~$200-400/month
  • Hybrid (Ollama + API): ~$50-150/month
  • Ollama only: FREE (but need good server)
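The scenario numbers above come from simple arithmetic you can redo for your own workload. A rough estimator, assuming about 500 tokens per query round trip (an assumption; measure your real prompts and answers):

```python
def monthly_cost(queries_per_day, price_per_1k_tokens, tokens_per_query=500, days=30):
    """Rough monthly API cost in dollars:
    queries/day * tokens/query * ($ per 1K tokens) * days."""
    return queries_per_day * tokens_per_query / 1000 * price_per_1k_tokens * days

# e.g. 100 queries/day at GPT-3.5's ~$0.0015 per 1K tokens:
# monthly_cost(100, 0.0015) -> ~$2.25/month for tokens alone
```

Real bills run higher than the pure token cost (longer prompts, retries, system messages), which is why the scenarios above quote ranges rather than exact figures.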

💡 Money-Saving Tips:

  • Use GPT-3.5 instead of GPT-4 (20x cheaper, still good!)
  • Use Ollama for simple tasks, API for complex ones
  • Cache AI responses for repeated questions
  • Use Google Gemini's free tier when possible
  • Batch multiple questions into one request
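The caching tip above takes only a few lines. A minimal sketch assuming exact-match prompts; `ask_model` is a hypothetical callable wrapping any of the clients shown earlier, and in production you would add expiry and persist the cache (e.g. Redis) instead of a module-level dict:

```python
import hashlib

_cache = {}

def cached_ai_call(prompt, ask_model):
    """Return a cached answer for a repeated prompt instead of paying for it twice."""
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = ask_model(prompt)  # only hit the API on a cache miss
    return _cache[key]
```

For admin panels this pays off quickly, since the same checks ("Is this IP suspicious?") tend to repeat many times a day.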