| Developer | duetg |
|---|---|
| Last updated | April 17, 2026 22:07 |
| PHP version | 7.4 or higher |
| WordPress version | 7.0 |
| License | GPL-2.0-or-later |
| License URI | License information |
Upload the duetg-ai-connector folder to the /wp-content/plugins/ directory.

To enable debug logging, add the following to your wp-config.php:
define('DUETGAICON_DEBUG', true);
When enabled, debug information will be written to your server's debug log (usually wp-content/debug.log). This includes:
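As an illustration only (this is not the plugin's actual code), gating diagnostics on a constant like DUETGAICON_DEBUG typically looks like the sketch below; the duetgaicon_debug_log helper and its message prefix are hypothetical:

```php
<?php
// Hypothetical sketch: gate diagnostic output on the DUETGAICON_DEBUG
// constant from wp-config.php. The helper name and message prefix are
// illustrative, not the plugin's real internals.
define('DUETGAICON_DEBUG', true);

function duetgaicon_debug_log(string $message): ?string
{
    // Do nothing unless the constant is defined and truthy.
    if (!defined('DUETGAICON_DEBUG') || !DUETGAICON_DEBUG) {
        return null;
    }
    $line = '[duetg-ai-connector] ' . $message;
    // error_log() output lands in wp-content/debug.log when WP_DEBUG_LOG is on.
    error_log($line);
    return $line;
}

duetgaicon_debug_log('provider request dispatched');
```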
No. This plugin requires WordPress 7.0 or higher because it uses the built-in connector API to manage API keys.
When using review notes, you may notice that the number of suggestions returned by the AI does not exactly match the number of notes shown in the editor. This is expected behavior and has two causes:
The AI may return several review types in a single value (e.g. review_type: "seo, accessibility"). The plugin preserves these as-is, so one suggestion may appear under multiple note categories in WordPress AI Client.

Common OpenAI-compatible base URLs:

- Ollama (local): http://localhost:11434/v1
- LM Studio (local): http://localhost:1234/v1
- MiniMax: https://api.minimax.io/v1
- Moonshot: https://api.moonshot.ai/v1
- DeepSeek: https://api.deepseek.com/v1
- SiliconFlow: https://api.siliconflow.cn/v1

Some providers require an API Key. For local installations that do not require authentication (such as Ollama), you can enter any string (e.g. "not-required") as the API Key.
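The combined review_type behavior described above can be illustrated with a small sketch (the helper function is hypothetical, not plugin code): splitting one combined value yields the several note categories a single suggestion ends up under.

```php
<?php
// Illustrative only: one AI suggestion carrying a combined review_type
// such as "seo, accessibility" maps onto multiple note categories.
// duetgaicon_split_review_types() is a hypothetical helper.
function duetgaicon_split_review_types(string $reviewType): array
{
    // Split on commas, trim whitespace, and drop empty entries.
    $types = array_map('trim', explode(',', $reviewType));
    return array_values(array_filter($types, fn ($t) => $t !== ''));
}

print_r(duetgaicon_split_review_types('seo, accessibility'));
```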
Local reasoning models running on Ollama (such as Gemma 4, QwQ, etc.) produce a long "thinking" chain before generating the final answer. This can take 30-60 seconds or more, which may trigger cURL's low-speed-limit timeout (30 seconds by default).

Cloud models generally work well: most cloud API providers (DeepSeek, MiniMax, Moonshot, etc.) respond quickly without timeout issues. If a cloud model frequently times out, it may have an unusually long thinking chain; try switching to a different model.

Recommended solutions for local models:
- Use a smaller non-reasoning model: qwen2.5:7b, llama3.2:3b, or phi3 work well without the timeout issue.
- Keep the model loaded in memory so requests avoid a cold start:

```bash
export OLLAMA_KEEP_ALIVE=-1  # Keep model in memory
```

By default, WordPress blocks requests to localhost and private IP addresses for security (SSRF protection). If you're using a local AI provider, you can disable this protection by adding the following to your wp-config.php:
define('DUETGAICON_ALLOW_LOCAL_URLS', true);
Warning: Disabling SSRF protection allows requests to private/local IPs. Only enable this if you trust your local AI provider and your server is not directly accessible from the internet.
When you use a local AI provider, this setting applies to both text models and image models.
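For background, the core function that rejects private hosts is wp_http_validate_url(), and it can be overridden through the http_request_host_is_external filter. The snippet below is a hedged sketch of that core mechanism, not the plugin's actual implementation:

```php
// Sketch only: how a localhost allowance can be expressed in core terms.
// wp_http_validate_url() rejects loopback/private hosts unless the
// 'http_request_host_is_external' filter returns true for them.
add_filter('http_request_host_is_external', function ($external, $host) {
    if (defined('DUETGAICON_ALLOW_LOCAL_URLS') && DUETGAICON_ALLOW_LOCAL_URLS) {
        // Treat loopback hosts as "external" so the request is allowed through.
        if (in_array($host, ['localhost', '127.0.0.1'], true)) {
            return true;
        }
    }
    return $external;
}, 10, 2);
```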
Tip: When DUETGAICON_ALLOW_LOCAL_URLS is enabled, a Network Connectivity Test tool appears on the Test AI page (Tools > Test AI). You can use it to verify that your WordPress server can reach your local AI provider before running actual AI feature tests. This is especially useful for debugging connection issues with local Ollama or LM Studio installations.
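If you prefer to probe from code rather than the Tools page, a check along these lines can confirm reachability. This is a sketch using core WordPress HTTP functions, run from within WordPress (for example via WP-CLI's `wp eval-file`); the URL assumes a default Ollama install, and /v1/models is the usual OpenAI-compatible model-listing endpoint:

```php
// Sketch: manual connectivity probe using the core WP HTTP API.
// Requires DUETGAICON_ALLOW_LOCAL_URLS for localhost targets.
$response = wp_remote_get('http://localhost:11434/v1/models', ['timeout' => 5]);

if (is_wp_error($response)) {
    echo 'Local provider unreachable: ' . $response->get_error_message();
} else {
    echo 'HTTP ' . wp_remote_retrieve_response_code($response);
}
```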
```php
use WordPress\AiClient\AiClient;

$registry = AiClient::defaultRegistry();

// Text Generation
$model = $registry->getProviderModel('custom_text', 'gpt-4');
$result = $model->generateTextResult([
    new \WordPress\AiClient\Messages\DTO\UserMessage([
        new \WordPress\AiClient\Messages\DTO\MessagePart('Your prompt here')
    ])
]);
echo $result->toText();

// Image Generation
$model = $registry->getProviderModel('custom_image', 'dall-e-3');
$result = $model->generateImageResult([
    new \WordPress\AiClient\Messages\DTO\UserMessage([
        new \WordPress\AiClient\Messages\DTO\MessagePart('Your prompt here')
    ])
]);
$files = $result->toImageFiles();
```