API Key Configuration
Configure your AI model API Key to unlock code execution, AI code assistant, and smart chat features on this site.
🔒 Privacy Notice
All API Keys are stored only in your browser's localStorage
This website never uploads, collects, or stores any of your API Keys
Configure Model
Select a provider to auto-fill the model list and Base URL. You can also edit them manually. The site's AI features (code assistant, smart chat, code explanation) will use your configured model.
Supported Models
| Provider | Base URL | Common Models |
|---|---|---|
| OpenAI | https://api.openai.com/v1 | gpt-4.1-nano · gpt-4.1-mini · gpt-4.1 · gpt-4o · o3-mini |
| DeepSeek | https://api.deepseek.com | deepseek-chat · deepseek-reasoner |
| Anthropic | https://api.anthropic.com | claude-sonnet-4-5 · claude-haiku-4-5 · claude-opus-4-5 |
| Kimi (Moonshot) | https://api.moonshot.ai/v1 | kimi-k2 · moonshot-v1-8k · moonshot-v1-128k |
| Qwen (Tongyi) | https://dashscope.aliyuncs.com/compatible-mode/v1 | qwen-max · qwen-plus · qwen-turbo |
| Zhipu GLM | https://open.bigmodel.cn/api/paas/v4 | glm-4-plus · glm-4 · glm-4-flash |
| SiliconFlow | https://api.siliconflow.cn/v1 | DeepSeek-V3 · Qwen2.5-72B · DeepSeek-R1 |
How to Get API Keys
• OpenAI: Visit platform.openai.com
• DeepSeek: Visit platform.deepseek.com
• Anthropic: Visit console.anthropic.com
• Kimi (Moonshot): Visit platform.moonshot.cn
• Qwen (Tongyi): Visit dashscope.console.aliyun.com
• Zhipu GLM: Visit open.bigmodel.cn
• SiliconFlow: Visit cloud.siliconflow.cn
Verify API Key
After saving your API Key, run the following examples to verify your configuration.
Example 1: Streaming Output Test (OpenAI)
```python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

stream = client.chat.completions.create(
    model="gpt-4.1-nano",
    messages=[{"role": "user", "content": "Describe one advantage of Python in one sentence"}],
    stream=True,
)

for chunk in stream:
    # Some chunks (e.g. the final one) may carry no content delta.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```

Example 2: Using DeepSeek
```python
import os
from langchain_deepseek import ChatDeepSeek

llm = ChatDeepSeek(
    model="deepseek-chat",
    api_key=os.environ.get("DEEPSEEK_API_KEY"),
)
response = llm.invoke("Explain machine learning in one sentence")
print(response.content)
```

Example 3: Using Anthropic Claude
```python
import os
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-sonnet-4-5",
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
)
response = llm.invoke("Explain causal inference in one sentence")
print(response.content)
```

Example 4: LangGraph Simple Graph
Create a simple LangGraph state graph for AI Q&A:
```python
import os
from typing import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END

# The graph state: a question flows in, an answer flows out.
class State(TypedDict):
    question: str
    answer: str

def answer_node(state: State):
    llm = ChatOpenAI(
        model="gpt-4.1-nano",
        api_key=os.environ.get("OPENAI_API_KEY"),
    )
    response = llm.invoke(state["question"])
    return {"answer": response.content}

graph = StateGraph(State)
graph.add_node("answer_node", answer_node)
graph.add_edge(START, "answer_node")
graph.add_edge("answer_node", END)

app = graph.compile()
result = app.invoke({"question": "What is LangGraph? Explain briefly."})
print(result["answer"])
```

Switching Models
- Tutorial code uses OpenAI models by default
- To use other models, modify the model name in the code as shown above
- The system automatically injects your configured API Key via environment variables
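To see which keys are actually visible to your code, a quick environment check can help. The variable names below follow the common PROVIDER_API_KEY convention and are assumptions, not names guaranteed by the site:

```python
import os

# Assumed environment-variable names for each provider; adjust them to
# match whatever your runtime actually injects.
KEY_VARS = [
    "OPENAI_API_KEY",
    "DEEPSEEK_API_KEY",
    "ANTHROPIC_API_KEY",
    "MOONSHOT_API_KEY",
    "DASHSCOPE_API_KEY",
    "ZHIPUAI_API_KEY",
    "SILICONFLOW_API_KEY",
]

configured = [name for name in KEY_VARS if os.environ.get(name)]
for name in KEY_VARS:
    status = "set" if name in configured else "missing"
    print(f"{name}: {status}")
```

If a provider shows as missing here, its examples above will fail with an authentication error.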
Start Learning
After configuration, begin your learning journey:
- Python Fundamentals — From basics to data analysis
- StatsPai - Statistics — Statistical inference & causal identification
- AI for Research — LLM, RAG, agents
- ML & Causal Inference — Predictive modeling & causal effects