GenAIHub

Client-Server Connectivity

Understanding HTTPS, WebSocket, and real-time communication for AI chatbots

How Client-Server Communication Works

Modern GenAI applications like chatbots require secure, real-time communication between the client (browser/app) and server. This involves establishing trusted connections via HTTPS/TLS and using streaming protocols like WebSocket or Server-Sent Events (SSE) for real-time responses.

  • 🔒 HTTPS/TLS: Encryption
  • 🔄 WebSocket: Bidirectional
  • 📡 SSE: Server Push
  • ⚡ REST API: Request/Response

How HTTPS Works (Step-by-Step)

HTTPS = HTTP + TLS/SSL Security. It ensures secure communication between Client (Browser) and Server using encryption.

HTTPS connection setup, step by step (💻 client ↔ ☁️ server):

1. TCP handshake: the client sends SYN →, the server replies ← SYN + ACK, the client answers ACK →. ✅ Connection established.
2. Certificate check (asymmetric encryption): Client Hello →, then ← Server Hello, ← Certificate (carrying the server's 🔑 public key; the matching 🔐 private key never leaves the server), ← Server Hello Done.
3. Key exchange: Client Key Exchange → (the 🔑 session key, encrypted with the server's public key), Change Cipher Spec →, Finished →; the server replies ← Change Cipher Spec, ← Finished. ✅ TLS connection established.
4. Secure data transfer (symmetric encryption): 🔒 encrypted requests and responses flow in both directions, all protected with the shared session key.

📡 Port 443 (HTTPS) | All data encrypted with the session key

πŸ” Asymmetric Encryption

Used for key exchange. Public key encrypts, Private key decrypts. Slower but secure for initial handshake.

⚡ Symmetric Encryption

Used for data transfer: both sides share the same session key. Much faster for ongoing communication.
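The handoff from asymmetric key agreement to a symmetric session key can be sketched with a toy Diffie-Hellman exchange. This is an illustrative sketch only: the prime, generator, and private values below are made up and far too small for real use (TLS 1.3 actually uses ECDHE over standardized curves).

```python
import hashlib

# Toy Diffie-Hellman key exchange (demo parameters, not secure sizes).
p = 2147483647  # public prime modulus (2**31 - 1)
g = 7           # public generator

# Each side picks a private value and shares only g**x mod p.
client_private = 123456789
server_private = 987654321
client_public = pow(g, client_private, p)
server_public = pow(g, server_private, p)

# Both sides derive the same shared secret without ever transmitting it,
# because (g**a)**b == (g**b)**a (mod p).
client_secret = pow(server_public, client_private, p)
server_secret = pow(client_public, server_private, p)
assert client_secret == server_secret

# Hash the shared secret down to a fixed-size symmetric session key.
session_key = hashlib.sha256(str(client_secret).encode()).digest()
print(len(session_key))  # 32 bytes, e.g. an AES-256 key
```

An eavesdropper sees only p, g, and the two public values; recovering the shared secret from those is the (hard) discrete-logarithm problem, which is what makes the subsequent symmetric encryption safe to key this way.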

How a Chatbot Connects (End-to-End)

Real-time Chat: Chatbots typically use Server-Sent Events (SSE) or WebSocket for streaming LLM responses token-by-token, providing a responsive user experience.

💻 Browser (React/JS client) → HTTPS 🔒 TLS 1.3 → API Gateway (• Auth • Rate Limit • TLS Termination) → Backend API (FastAPI/Flask: • validate input • stream response • error handling) → 🤖 LLM API (OpenAI/Claude). The LLM streams tokens back to the backend, the backend relays them to the browser over HTTP/2 + SSE (or WebSocket), and the UI renders each token as it arrives. Token-by-token streaming provides real-time responses (perceived latency ~100 ms to the first token vs 5-10 s for the full response).

Connection Methods Comparison

| Method | Direction | Use Case | Chatbot Use |
|---|---|---|---|
| REST API | Request → Response | Standard CRUD operations | Non-streaming queries |
| SSE (Server-Sent Events) | Server → Client (one-way) | Server push, streaming | ✅ LLM token streaming |
| WebSocket | Bidirectional | Real-time chat, gaming | ✅ Full-duplex chat |
| Long Polling | Request → Wait → Response | Fallback for old browsers | Legacy support only |
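The SSE wire format itself is simple: each event is one or more "data:" lines followed by a blank line. A minimal client-side parser sketch in Python (the chunk boundaries below are hypothetical) shows why the blank-line delimiter matters when events arrive split across network reads:

```python
def parse_sse(stream):
    """Yield the data payload of each complete SSE event.

    `stream` is any iterable of decoded text chunks (e.g. HTTP body reads).
    Events are delimited by a blank line, per the SSE wire format.
    """
    buffer = ""
    for chunk in stream:
        buffer += chunk
        # A blank line ("\n\n") terminates an event; anything after the
        # last delimiter is an incomplete event and stays buffered.
        while "\n\n" in buffer:
            raw_event, buffer = buffer.split("\n\n", 1)
            data_lines = [line[len("data: "):]
                          for line in raw_event.split("\n")
                          if line.startswith("data: ")]
            if data_lines:
                yield "\n".join(data_lines)

# Chunks arriving split mid-event still parse into whole events.
chunks = ["data: Hel", "lo\n\ndata: wor", "ld\n\n"]
print(list(parse_sse(chunks)))  # ['Hello', 'world']
```

This buffering is exactly what the browser's built-in EventSource does for you on GET endpoints; hand-rolled clients (as in the fetch-based example below) have to do it themselves.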

SSE Implementation for Chatbots

Server (FastAPI)

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
client = OpenAI()

class ChatRequest(BaseModel):
    message: str

async def stream_chat(message: str):
    """Stream LLM response as SSE events"""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": message}],
        stream=True
    )

    for chunk in response:
        token = chunk.choices[0].delta.content
        if token:
            yield f"data: {token}\n\n"  # SSE format: "data: ..." + blank line

    yield "data: [DONE]\n\n"

@app.post("/api/chat")
async def chat(request: ChatRequest):  # parses {"message": ...} from the JSON body
    return StreamingResponse(
        stream_chat(request.message),
        media_type="text/event-stream",
        headers={"Cache-Control": "no-cache"}
    )

Client (JavaScript)

async function sendMessage(message) {
    const response = await fetch('/api/chat', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ message })
    });

    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    let buffer = '';

    while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        // stream: true keeps multi-byte characters split across chunks intact
        buffer += decoder.decode(value, { stream: true });

        // Process only complete lines; keep any partial line buffered
        const lines = buffer.split('\n');
        buffer = lines.pop();

        for (const line of lines) {
            if (line.startsWith('data: ')) {
                const token = line.slice(6);
                if (token !== '[DONE]') {
                    appendToChat(token);  // Update UI in real-time
                }
            }
        }
    }
}

Why HTTPS Matters

πŸ” Data Confidentiality

All messages between user and chatbot are encrypted. No eavesdropping on sensitive conversations.

πŸ›‘οΈ Data Integrity

Messages cannot be modified in transit. Prevents injection of malicious responses.

βœ… User Trust

Browser shows secure padlock. Users trust the chatbot with sensitive queries.

πŸš€ SEO & Performance

HTTPS is required for HTTP/2 which enables faster multiplexed connections.

⚠️ MITM Protection: HTTPS prevents Man-in-the-Middle attacks where attackers could intercept API keys, modify LLM responses, or steal user data.
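On the client side, Python's standard-library ssl module enforces the checks that block MITM attacks by default. The sketch below only inspects a default context (no network connection is made):

```python
import ssl

# A default client context requires that the server certificate chains to a
# trusted CA and that its hostname matches the one we asked for.
ctx = ssl.create_default_context()
print(ctx.check_hostname)                     # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True

# Optionally refuse legacy protocol versions as well.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Disabling either check (a common "quick fix" for certificate errors)
# silently re-opens the door to interception:
#   ctx.check_hostname = False
#   ctx.verify_mode = ssl.CERT_NONE   # never do this in production
```

The same caution applies to backend code calling LLM APIs: passing verify=False (or its equivalent) to an HTTP client discards exactly the protections this section describes.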

Key Takeaways

  • HTTPS = HTTP + TLS: Encryption for all client-server communication
  • Asymmetric → Symmetric: Use asymmetric for key exchange, symmetric for data transfer
  • Port 443: Standard HTTPS port, always use in production
  • SSE for streaming: Perfect for LLM token-by-token responses
  • WebSocket for bidirectional: When you need real-time two-way chat
  • TLS 1.3: Latest protocol, faster handshake, better security
  • API Gateway: Handle TLS termination, auth, and rate limiting at the edge
