Building User-Friendly AI Chat Interfaces: Seamless Interaction, Demos, and Testing With Top Python Frameworks
An Overview of AI Chat Interfaces
A chat interface for an AI application is the front end through which users interact with AI assistants, local and cloud-hosted large language models (LLMs), and agentic workflows. Like popular AI apps such as Grok for iOS, Le Chat, and ChatGPT on the web and mobile, these interfaces support sending and generating text, images, audio, or video (multimodal input and output).
How to Build AI Applications With Python
Developers can build AI applications against APIs from many model providers. Popular options include OpenAI, Anthropic, Mistral, Grok, and DeepSeek, and developers typically assemble these AI solutions and agents with platforms such as LlamaIndex, LangChain, PydanticAI, Embedchain, LangGraph, and CrewAI. While the command line is great for testing and debugging these applications, you need an intuitive interface to interact with the underlying AI functionality, test with real users, or share your work with the world.
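A provider call is usually only a few lines. Below is a minimal sketch using OpenAI's official Python SDK; it assumes pip install openai, an OPENAI_API_KEY environment variable, and an illustrative model name. Other providers follow a similar request/response pattern.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; use whichever model your provider offers
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain multimodal AI in one sentence."},
    ],
)
print(response.choices[0].message.content)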
Why Build AI Chat Interfaces With Python Libraries?
- Output streaming: streaming LLM responses is complex to implement by hand, but most UI-building platforms offer a convenient way to enable it (see the sketch after this list)
- Chatbot animations: some libraries give you effects like typing indicators for free
- Real-time feedback: quickly gather suggestions for improving your project
- Performance monitoring: Chainlit monitors LLM performance with the Literal AI observability service
- Easy testing: these libraries provide convenient ways to test and demo your apps
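To see how little code output streaming takes, here is a minimal sketch using Gradio's gr.ChatInterface: when the chat function is a generator, Gradio streams each yielded partial string to the browser. The echoed reply is a stand-in for a real LLM call.
import time
import gradio as gr

def stream_reply(message, history):
    # Yield progressively longer strings; gr.ChatInterface renders each one,
    # producing a streaming/typing effect in the UI.
    reply = f"You said: {message}"
    partial = ""
    for token in reply.split():
        partial += token + " "
        time.sleep(0.05)  # simulated model latency
        yield partial

gr.ChatInterface(stream_reply).launch()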
Tools and Use Cases for Building AI Chat Interfaces
- Multimodal AI: audio-to-text conversion, visual question answering, document Q&A, and more
- Computer vision: text and image classification in video conferencing
- Natural language processing: summarization, translation, Q&A bots
- Audio processing: speech synthesis, speech recognition, audio classification
1. Gradio: Build UIs to Test and Deploy AI Apps
Gradio, Hugging Face's open source library, lets you quickly create UIs for LLMs, agents, and real-time audio and video applications. Highlights include:
- Quick start: build a fully functional interface in a few lines of code (see the minimal example after this list)
- Easy sharing: embed interfaces in Jupyter Notebooks or share them via Hugging Face
- Production deployment: host apps permanently on Hugging Face Spaces
- Custom components: create and publish your own UI component libraries
- Agent support: easily create interfaces such as text-to-image
- Multiple use cases: supports four interface types, including standard input-to-output, output-only, and input-only demos
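As a taste of the quick start, a standard input-to-output interface fits in a few lines (the greet function below is a placeholder for real model logic):
import gradio as gr

def greet(name):
    # Placeholder; a real app would call a model here
    return f"Hello, {name}!"

demo = gr.Interface(fn=greet, inputs="text", outputs="text")
demo.launch()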
Installing Gradio and Getting Started
python -m venv venv
source venv/bin/activate
pip install gradio
Example code (grok_ui.py):
import gradio as gr
import os
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

# Function to handle chat interactions
def chat_with_grok(message, history, system_message, model_name, temperature, max_tokens):
    # In a real implementation, this would call the Grok API.
    # For now, echo the inputs to demonstrate the UI is working.
    bot_message = (
        f"You selected model: {model_name}\n"
        f"System message: {system_message}\n"
        f"Temperature: {temperature}\n"
        f"Max tokens: {max_tokens}\n\n"
        f"Your message: {message}"
    )
    # Append the exchange to the history so gr.Chatbot renders the conversation
    return history + [(message, bot_message)]

# Create the Gradio interface
with gr.Blocks(theme=gr.themes.Soft(primary_hue="blue")) as demo:
    gr.Markdown("# Grok AI Chat Interface")
    with gr.Row():
        with gr.Column(scale=3):
            # Main chat interface
            chatbot = gr.Chatbot(
                height=600,
                show_copy_button=True,
                avatar_images=("👤", "🤖"),
                bubble_full_width=False,
            )
            # Message input
            msg = gr.Textbox(
                placeholder="Send a message...",
                container=False,
                scale=7,
                show_label=False,
            )
            with gr.Row():
                submit_btn = gr.Button("Send", variant="primary", scale=1)
                clear_btn = gr.Button("Clear", variant="secondary", scale=1)
        with gr.Column(scale=1):
            # Model settings sidebar
            gr.Markdown("### Model Settings")
            model_dropdown = gr.Dropdown(
                choices=["grok-1", "grok-2", "grok-3-beta"],
                value="grok-3-beta",
                label="Model"
            )
            system_message = gr.Textbox(
                placeholder="You are a helpful AI assistant...",
                label="System Message",
                lines=4
            )
            with gr.Accordion("Advanced Settings", open=False):
                temperature = gr.Slider(
                    minimum=0.0,
                    maximum=1.0,
                    value=0.7,
                    step=0.01,
                    label="Temperature"
                )
                max_tokens = gr.Slider(
                    minimum=100,
                    maximum=4000,
                    value=1000,
                    step=100,
                    label="Max Tokens"
                )

    # Set up event handlers
    submit_btn.click(
        chat_with_grok,
        inputs=[msg, chatbot, system_message, model_dropdown, temperature, max_tokens],
        outputs=[chatbot],
    ).then(
        lambda: "",
        None,
        msg,
        queue=False
    )
    msg.submit(
        chat_with_grok,
        inputs=[msg, chatbot, system_message, model_dropdown, temperature, max_tokens],
        outputs=[chatbot],
    ).then(
        lambda: "",
        None,
        msg,
        queue=False
    )
    clear_btn.click(lambda: None, None, chatbot, queue=False)

# Launch the app
if __name__ == "__main__":
    demo.launch()
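Run the demo with python grok_ui.py and open the local URL that Gradio prints. To share it for testing beyond your machine, pass share=True to demo.launch(), which creates a temporary public link.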
2. Streamlit: Build AI Apps for the Cloud
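The example below builds a chat UI for a local DeepSeek R1 model served by Ollama. It keeps the conversation in st.session_state, streams the model's reply into a placeholder as tokens arrive, and falls back to simulated streaming if the installed ollama client doesn't stream. Save it as app.py: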
import streamlit as st
import ollama
import time

def stream_data(text, delay: float = 0.02):
    for word in text.split():
        yield word + " "
        time.sleep(delay)

# Input for the prompt
prompt = st.chat_input("Ask DeepSeek R1")

# Initialize chat history in session state if it doesn't exist
if "messages" not in st.session_state:
    st.session_state.messages = []

# Display chat history
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

if prompt:
    # Add user message to chat history
    st.session_state.messages.append({"role": "user", "content": prompt})
    # Display input prompt from user
    with st.chat_message("user"):
        st.markdown(prompt)
    # Processing
    with st.chat_message("assistant"):
        message_placeholder = st.empty()
        full_response = ""
        # Show a spinner while waiting for the initial response
        with st.spinner("Thinking...", show_time=True):
            response = ollama.chat(
                model="deepseek-r1:8b",
                messages=[{"role": m["role"], "content": m["content"]} for m in st.session_state.messages],
                stream=True  # Enable streaming if supported by your ollama version
            )
        # If streaming is supported
        if hasattr(response, "__iter__"):
            for chunk in response:
                if chunk and "message" in chunk and "content" in chunk["message"]:
                    content = chunk["message"]["content"]
                    full_response += content
                    message_placeholder.markdown(full_response + "▌")
            message_placeholder.markdown(full_response)
        else:
            # Fallback for non-streaming response
            full_response = response["message"]["content"]
            # Simulate streaming for better UX by accumulating words
            displayed = ""
            for word in stream_data(full_response):
                displayed += word
                message_placeholder.markdown(displayed + "▌")
            message_placeholder.markdown(full_response)
    # Add assistant response to chat history
    st.session_state.messages.append({"role": "assistant", "content": full_response})
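To try it, install the dependencies with pip install streamlit ollama, make sure a local Ollama server is running with the model pulled (ollama pull deepseek-r1:8b), then start the app with streamlit run app.py.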
3. Chainlit: Build Conversational AI Interfaces
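The example below wires Chainlit's @cl.on_message hook to a local deepseek-r1:8b model through LiteLLM, streaming tokens into the reply message as they arrive. Save it as app.py: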
import chainlit as cl
import litellm

@cl.on_message
async def on_message(message: cl.Message):
    msg = cl.Message(content="")
    await msg.send()

    system_message = {
        "role": "system",
        "content": """You are an advanced AI assistant powered by the deepseek-r1:8b model.
        Your strengths:
        - Providing clear, accurate, and thoughtful responses
        - Breaking down complex topics into understandable explanations
        - Offering balanced perspectives on questions
        - Being helpful while acknowledging your limitations
        Guidelines:
        - If you're uncertain about something, acknowledge it rather than making up information
        - When appropriate, suggest related questions the user might want to ask
        - Maintain a friendly, respectful tone
        - Format your responses with markdown when it improves readability
        """,
    }

    response = await litellm.acompletion(
        model="ollama/deepseek-r1:8b",
        messages=[
            system_message,
            {"role": "user", "content": message.content},
        ],
        api_base="http://localhost:11434",
        stream=True,
    )

    async for chunk in response:
        if chunk:
            content = chunk.choices[0].delta.content
            if content:
                await msg.stream_token(content)

    await msg.update()
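Install the dependencies with pip install chainlit litellm, then start the UI with chainlit run app.py -w (the -w flag reloads on code changes). As above, this assumes an Ollama server with the model pulled is running at http://localhost:11434.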
Other Python Frameworks
- Mesop: a rapid AI app-building tool used by teams at Google
- FastHTML: a modern web application framework in pure Python
- Reflex: a full-stack Python framework with support for AI image generation
Summary
Choosing an AI chat interface framework means weighing features, scalability, integrations, and deployment options. Gradio suits rapid prototyping, Streamlit excels at data visualization apps, and Chainlit focuses on enterprise-grade AI solutions. Pick the tool that best fits your project's needs, or combine it with prebuilt components from platforms such as Stream to integrate AI assistant features quickly.