Gemma 4 + Ollama Usage Guide
A complete guide to running Gemma 4 with Ollama: installation, pulling models, using the REST API, Python integration via the OpenAI SDK, creating custom Modelfiles, and more.
1. Install Ollama
# Linux / macOS
curl -fsSL https://ollama.com/install.sh | sh
# macOS (Homebrew)
brew install ollama
# Windows: download the .exe from ollama.com
After installation, Ollama runs as a background service. Verify it with: ollama --version.
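Beyond the version check, you can confirm the background service is actually listening; a minimal Python sketch against the default port, using Ollama's /api/version endpoint:
import requests

# Query the local Ollama service; 11434 is the default port.
resp = requests.get("http://localhost:11434/api/version", timeout=5)
print(resp.json())  # e.g. {"version": "..."}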
2. Pull Gemma 4 Models
# Pull the default 31B model
ollama pull gemma4
# Pull specific variants
ollama pull gemma4:e4b # Edge 4B — best for 8 GB VRAM
ollama pull gemma4:e2b # Edge 2B — runs on 4 GB VRAM or CPU
# List downloaded models
ollama list
Which version should you choose?
| Tag | VRAM | Speed / Quality |
|---|---|---|
| gemma4:e2b | ~4 GB | Fastest |
| gemma4:e4b | ~6 GB | Fast |
| gemma4 | ~18 GB | Best quality |
Ollama uses GGUF-quantized models (Q4_K_M by default).
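To check what is installed programmatically, the /api/tags endpoint (listed in section 4) returns each model's name and size; a small sketch:
import requests

# GET /api/tags lists installed models; size is reported in bytes.
models = requests.get("http://localhost:11434/api/tags").json()["models"]
for m in models:
    print(f'{m["name"]}: {m["size"] / 1e9:.1f} GB')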
3. Run from the Command Line
# Interactive chat in terminal
ollama run gemma4
# Single prompt (non-interactive)
ollama run gemma4 "Explain the MoE architecture in Gemma 4"
# With a custom system prompt
ollama run gemma4 --system "You are a Python expert." "Write a FastAPI hello world"
In interactive mode, type /bye to exit or /help to see available commands.
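The non-interactive form is easy to script; a minimal Python sketch, assuming the ollama binary is on your PATH:
import subprocess

# Run a single prompt and capture the completion from stdout.
result = subprocess.run(
    ["ollama", "run", "gemma4", "Explain the MoE architecture in Gemma 4"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)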
4. REST API
Ollama exposes a REST API at http://localhost:11434:
# Generate completion
curl http://localhost:11434/api/generate -d '{
"model": "gemma4",
"prompt": "Why is Gemma 4 good for local deployment?",
"stream": false
}'
# Chat completions
curl http://localhost:11434/api/chat -d '{
"model": "gemma4",
"messages": [
{"role": "user", "content": "Hello!"}
]
}'
| Endpoint | Purpose |
|---|---|
| POST /api/generate | Single completion |
| POST /api/chat | Multi-turn conversation |
| GET /api/tags | List installed models |
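The same calls work from Python with the requests library; with "stream": false the server returns a single JSON object whose response field holds the full completion:
import requests

# Non-streaming generation: one JSON object comes back when decoding finishes.
r = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma4",
        "prompt": "Why is Gemma 4 good for local deployment?",
        "stream": False,
    },
)
print(r.json()["response"])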
5. Use with the OpenAI Python SDK
Ollama supports the OpenAI API format at /v1/, so you can use your existing OpenAI code with zero changes:
from openai import OpenAI
client = OpenAI(
base_url="http://localhost:11434/v1",
api_key="ollama" # required but unused
)
response = client.chat.completions.create(
model="gemma4",
messages=[{"role": "user", "content": "What is Gemma 4?"}]
)
print(response.choices[0].message.content)
6. Streaming Responses
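The OpenAI-compatible /v1/ endpoint from the previous section also streams; a minimal sketch reusing the same client setup:
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
stream = client.chat.completions.create(
    model="gemma4",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True,
)
for chunk in stream:
    # each chunk carries a delta with the next fragment of text
    print(chunk.choices[0].delta.content or "", end="", flush=True)
Ollama's native /api/chat endpoint streams too, returning one JSON object per line: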
import requests
import json
response = requests.post(
"http://localhost:11434/api/chat",
json={
"model": "gemma4",
"messages": [{"role": "user", "content": "Tell me a story"}],
"stream": True
},
stream=True
)
for line in response.iter_lines():
if line:
chunk = json.loads(line)
print(chunk["message"]["content"], end="", flush=True)
7. Custom Modelfile
Based on an Ollama base model
# Create a Modelfile
FROM gemma4:e4b
SYSTEM "You are a helpful coding assistant specializing in Python."
PARAMETER temperature 0.7
PARAMETER top_p 0.9
# Build and run your custom model
ollama create mygemma -f Modelfile
ollama run mygemma
Based on a local GGUF file
# Use a local GGUF file
FROM /path/to/your/gemma4-q4_k_m.gguf
SYSTEM "You are a helpful assistant."
Download GGUF files from community repositories on Hugging Face, for example bartowski/google_gemma-4-E4B-it-GGUF.
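If you'd rather script the download, the huggingface_hub package can fetch a single file from that repo; a sketch where the exact .gguf filename is an assumption (check the repo's file list for the quantization you want):
from huggingface_hub import hf_hub_download

# Repo name from the text above; the filename is a guess -- pick the real
# Q4_K_M file from the repo's file listing.
path = hf_hub_download(
    repo_id="bartowski/google_gemma-4-E4B-it-GGUF",
    filename="google_gemma-4-E4B-it-Q4_K_M.gguf",
)
print(path)  # point the Modelfile's FROM line at this path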
Tips & Common Issues
Performance tuning tips
- Set OLLAMA_NUM_GPU=1 to force GPU offloading
- Use OLLAMA_NUM_PARALLEL=4 for concurrent requests (see the launch sketch after this list)
- Keep context short: 2048 tokens is enough for most tasks
- E4B with Q4_K_M is the best quality/speed ratio under 8 GB
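A minimal way to apply the two environment variables above when launching the server from Python (again assuming ollama is on your PATH):
import os
import subprocess

# Start the server with the tuning variables from the list above.
env = {**os.environ, "OLLAMA_NUM_GPU": "1", "OLLAMA_NUM_PARALLEL": "4"}
subprocess.Popen(["ollama", "serve"], env=env)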
Expose Ollama on the Network
OLLAMA_HOST=0.0.0.0 ollama serve
Then access it from other devices at http://<your-ip>:11434. Add authentication with a reverse proxy (nginx/Caddy) for production.
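From the remote device, everything in sections 4 and 5 works unchanged once you swap in the host's address; for example, with the OpenAI client:
from openai import OpenAI

# Same client as section 5, pointed at the networked host;
# replace <your-ip> with the actual address of the machine running Ollama.
client = OpenAI(base_url="http://<your-ip>:11434/v1", api_key="ollama")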